LLMs, Machine Learning and Data Leakage Prevention
LLMs appear to be very capable (with caveats, no doubt) of detecting anomalous text, data and information within a given context. A couple of experiments quickly demonstrate this capability. With the help of Perplexity AI, I have run the following experiments:
Experiment 1, Data Exfiltration
Inserting a `price` property into a `package.json` and asking Perplexity AI to identify the anomalous property.
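For concreteness, here is a minimal sketch of how Experiment 1 could be reproduced, assuming an OpenAI-compatible chat-completions endpoint; the base URL, API key and model name are placeholders, not a confirmed Perplexity AI configuration:

```python
# Minimal sketch of Experiment 1: plant an anomalous property in a package.json
# and ask an LLM to spot it. Endpoint, API key and model name are placeholders.
from openai import OpenAI

# package.json with an anomalous "price" property planted in it
PACKAGE_JSON = """{
  "name": "example-app",
  "version": "1.2.0",
  "price": "4.99 USD",
  "dependencies": {
    "express": "^4.18.2"
  }
}"""

PROMPT = (
    "The following is a package.json file. Identify any property that is "
    "anomalous or does not belong in a standard package.json, and explain why.\n\n"
    + PACKAGE_JSON
)

client = OpenAI(base_url="https://api.perplexity.ai", api_key="YOUR_API_KEY")  # placeholders

response = client.chat.completions.create(
    model="sonar",  # placeholder model name
    messages=[{"role": "user", "content": PROMPT}],
)
print(response.choices[0].message.content)
```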

Experiment 2, Treasure Island
Inserting a phrase about social media into the second paragraph of the first chapter of Treasure Island by Robert Louis Stevenson.
Input 🔽

Output 📤

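The same prompt pattern could cover Experiment 2; a hypothetical sketch, where `treasure_island_ch1.txt` is assumed to be a local copy of the chapter with the social-media phrase already planted:

```python
# Sketch of Experiment 2. "treasure_island_ch1.txt" is a hypothetical local copy
# of the first chapter with a sentence about social media planted in the second
# paragraph; endpoint and model name are placeholders, as in the snippet above.
from openai import OpenAI

client = OpenAI(base_url="https://api.perplexity.ai", api_key="YOUR_API_KEY")  # placeholders

with open("treasure_island_ch1.txt") as f:
    chapter_one = f.read()

prompt = (
    "The following is the first chapter of a 19th-century novel. Identify any "
    "sentence that seems anachronistic or out of place, and explain why.\n\n"
    + chapter_one
)

response = client.chat.completions.create(
    model="sonar",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```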
Models of usage ♻️
A couple of "finger in the air" models for implementation:
- Converting company policies around code creation and open source into prompts asked of the code before it is accepted for push or publication, with results shared as informational (see the sketch after this list)
- Implementing a prompt for internal agents to ask further or more expansive questions of the code set; different coding ecosystems and languages introduce nuances that people can bring expertise to
- Using the inputs and outputs, as confirmed with human support, to re-train and strengthen a given model
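As a rough illustration of the first model, a hypothetical pre-push check could render a company policy as a prompt and ask a model to flag anything in the outgoing diff that appears to violate it. The policy wording, endpoint, model name and exit-code convention below are all assumptions, not an existing tool:

```python
# Hypothetical pre-push check: ask an LLM whether the outgoing diff violates
# company policy. Policy text, endpoint and model name are placeholders;
# results are informational only, a human still decides.
import subprocess
import sys

from openai import OpenAI

POLICY_PROMPT = (
    "Company policy: code pushed to public repositories must not contain "
    "credentials, internal hostnames, customer data, or references to "
    "unreleased products. Review the diff below and list any lines that "
    "appear to violate this policy. If none, reply exactly 'NO ISSUES'."
)

def outgoing_diff() -> str:
    """Return the diff about to be pushed (here: everything ahead of origin/main)."""
    return subprocess.run(
        ["git", "diff", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

def main() -> int:
    client = OpenAI(base_url="https://api.perplexity.ai", api_key="YOUR_API_KEY")  # placeholders
    diff = outgoing_diff()
    if not diff:
        return 0
    response = client.chat.completions.create(
        model="sonar",  # placeholder model name
        messages=[{"role": "user", "content": POLICY_PROMPT + "\n\n" + diff}],
    )
    verdict = response.choices[0].message.content
    print(verdict)
    # Informational by default: a non-zero exit warns without having to block.
    return 1 if "NO ISSUES" not in verdict.upper() else 0

if __name__ == "__main__":
    sys.exit(main())
```

Keeping the check informational (warn rather than block) matches the "results shared as informational" point above, while still surfacing anything suspicious before publication.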
Feedback 💬
On reflection, this seems extremely useful for our mission of reducing enterprise data leakage during code creation and contribution. Whilst I am not advocating for full automation with an LLM per se, it could certainly be additive and informational when an agent is assessing potential leakage in a snippet of code or an entire codebase.
I'd be keen to hear the opinions of others, and reflections on whether this is "just another AI idea (JAAII)" or something that is practical, implementable and helpful to us. I am aware that the accuracy of any given model is a looming issue that would need to be addressed. Hallucinations and false positives carry a high risk in this scenario (i.e. leakage) - I would be curious to hear any feedback on how this could be mitigated or addressed.
If you think you can present experiments that better demonstrate the power of an LLM in detecting and preventing data or IP leakage in code, feel free to share ❤️