By: PR Fueled
AI-powered browser extensions are proliferating, offering services that range from summarizing web pages and assisting with writing tasks to converting plain text into functional code.
However, the convenience these tools provide comes with significant security risks. Some extensions are straightforward malware designed to steal data, while others are poorly secured projects with hastily written privacy policies. Even well-known brands like Google can ship AI experiments that raise security concerns.
Malware Posing as AI Browser Extensions
The most blatant security risk of AI browser extensions is that some of them are simply malware in disguise. On March 8, Guardio reported that a Chrome extension called “Quick Access to Chat GPT” was hijacking users’ Facebook accounts and stealing all cookies stored on their browsers, including security and session tokens. This extension, which had been in the Chrome store for only a week, was downloaded by over 2000 users per day.
Although Google removed this extension after the report, the problem persists: new malicious extensions will inevitably continue to appear. According to Kolide, this extension should have triggered alarms at both Google and Facebook, yet neither platform acted until outside researchers flagged it. This slow response illustrates one of the less obvious challenges of securing browser use across an enterprise.
The Hidden Dangers of AI Browser Extensions
Despite their utility, AI browser extensions are not entirely free from security risks. Many companies lack policies to assess these risks, leaving users vulnerable. This lack of oversight means that users can freely install these extensions and potentially expose sensitive data to malicious actors.
Security Risks of Legitimate AI-Powered Browser Extensions
When discussing the security risks of legitimate AI browser extensions, the picture gets murkier and often controversial. Here are a few of the potential security issues:
- Sensitive Data Exposure
Data shared with a generative AI tool could be incorporated into its training data and surface for other users. For instance, a business manager who pastes a strategy report into an AI-powered browser extension may find that the model later reproduces details of that strategy when a competitor asks about the company.
Fears of such compromise have led companies like Verizon, Amazon, and Apple to ban or severely restrict the use of generative AI. Apple, for example, restricted internal use of AI tools over concerns that its employees might leak confidential project information into the system.
- Data Breaches
Even reputable AI companies can experience data breaches. In March this year, OpenAI announced a bug that allowed some users to see titles from another active user’s chat history and personal information such as name, email address, and payment address.
How vulnerable these extensions are to breaches depends on how much user data they retain, a subject on which many “respectable” extensions remain frustratingly vague.
- Copyright and Plagiarism Issues
The fact that AI-generated content can closely resemble existing human-created content has started raising legal questions about copyright infringement. This is particularly concerning with tools like GitHub Copilot, which can generate buggy code that replicates known security flaws. These issues are so severe that Stack Overflow’s volunteer moderators went on strike to protest the platform’s decision to allow AI-generated content, fearing the spread of incorrect information and plagiarism.
Despite AI developers’ good-faith efforts to mitigate these risks, it remains challenging to separate the good actors from the bad. Even widely used extensions like Fireflies, which transcribes meetings and videos, have terms of service that place the responsibility on users to ensure their content doesn’t violate any rules and only promise to take “reasonable means to preserve the privacy and security of such data.”
How to Choose the Best Browsing Security
According to Kolide, when choosing browser security tools, users should consider the specific features they need. If you want ad blocking, choose extensions that specialize in it; if you want personalized recommendations, opt for AI extensions that match content to your tastes.
AI browser extensions often pose as helpful tools, making it challenging to differentiate between the good, the bad, and the ugly. Their AI capabilities enable them to learn and adapt, which means even initially benign extensions might turn rogue over time. However, a browser security platform like LayerX Security can help organizations address this problem.
Here are a few tips to stay alert while choosing your browsing companion:
- Research and Reviews
Read reviews and conduct research to learn about the extension’s reputation and effectiveness.
- Check Developer Credentials
Ensure the extension is developed by a reputable company or individual with a history of trustworthiness.
- Avoid Shady Downloads
Download extensions from official websites or trusted app stores to prevent malware-infected versions.
- Look for Frequent Updates
A good extension should receive regular updates to address new threats and vulnerabilities.
- User Recommendations
Seek recommendations from trusted sources or online communities to find extensions that others have vetted.
What AI Policies Should I Have for Employees?
For companies to thrive in the era of Generative AI, they must set clear guidelines for how employees can use AI tools. So, if you’re in charge of dictating your company’s AI policies, here are a few best practices:
- Education
As always, education comes first. Most employees are unaware of the security risks posed by AI tools, so it is vital to educate your workforce about the risks of sharing data with, and downloading files from, the web. Employees can also be trained to distinguish malicious products from legitimate ones.
- Allowlisting
Even with education in place, employees are unlikely to read an app’s full data privacy policy before downloading it. A practical solution is to allowlist extensions on a case-by-case basis, so that only vetted tools can be installed. This approach is preferable to outright bans, which can lead employees to seek unauthorized tools, creating shadow IT problems.
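As a rough illustration of what allowlisting can look like in practice, the sketch below generates a managed-Chrome policy file that blocks all extensions by default and permits only approved ones. The extension ID shown is a placeholder, and the output directory is local for demonstration; on a real Linux fleet such a file would typically be deployed to /etc/opt/chrome/policies/managed/ by your management tooling.

```python
import json
import os

# Placeholder ID for a vetted extension -- substitute your approved IDs.
APPROVED_EXTENSIONS = ["aaaabbbbccccddddeeeeffffgggghhhh"]

policy = {
    "ExtensionInstallBlocklist": ["*"],                # block everything by default...
    "ExtensionInstallAllowlist": APPROVED_EXTENSIONS,  # ...except explicitly vetted extensions
}

# Local directory for the sketch; use /etc/opt/chrome/policies/managed in production.
policy_dir = "policies/managed"
os.makedirs(policy_dir, exist_ok=True)
path = os.path.join(policy_dir, "extension_allowlist.json")
with open(path, "w") as f:
    json.dump(policy, f, indent=2)
print("wrote", path)
```

The blocklist wildcard plus allowlist pattern means new extensions are denied until someone reviews and approves them, which matches the case-by-case process described above.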
- Visibility and Zero Trust Access
Companies need a way to quickly identify risky AI-based extensions. IT teams should be able to query the company’s device fleet to detect installed extensions, and devices running dangerous extensions should be automatically blocked from accessing company resources.
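The fleet-query idea can be sketched as a small audit helper: scan a machine's Chrome profile for installed extensions and flag manifests requesting high-risk permissions such as cookie access. The profile path assumes Chrome on Linux, and the risk list is an illustrative choice, not an exhaustive one.

```python
import glob
import json
import os

# Illustrative set of permissions worth flagging; tune for your own policy.
RISKY = {"cookies", "webRequest", "history", "tabs", "<all_urls>", "clipboardRead"}

def flag_risky(manifest: dict) -> set:
    """Return the subset of a manifest's requested permissions considered high-risk."""
    perms = set(manifest.get("permissions", [])) | set(manifest.get("host_permissions", []))
    return perms & RISKY

def scan_profile(profile_dir: str) -> None:
    """Print each installed extension in a Chrome profile that requests risky permissions."""
    pattern = os.path.join(profile_dir, "Extensions", "*", "*", "manifest.json")
    for path in glob.glob(pattern):
        with open(path) as f:
            manifest = json.load(f)
        risky = flag_risky(manifest)
        if risky:
            print(manifest.get("name", "?"), "->", sorted(risky))

# Example: a manifest requesting cookie access, like the Guardio-reported
# stealer would have needed, trips the check.
sample = {"name": "Quick Helper", "permissions": ["cookies", "storage"]}
print(flag_risky(sample))  # {'cookies'}
```

A real deployment would run a check like this through an endpoint-management agent across every device, feeding the results into the zero-trust access decision described above.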
Conclusion
AI-powered browser extensions offer impressive capabilities, but they come with substantial security risks. Malware can disguise itself as a helpful tool and exfiltrate data from across an entire business. It is therefore vital to conduct thorough research, check developer credentials, and implement clear policies. By doing this, both users and organizations can mitigate these risks.
Companies like LayerX Security are constantly developing strategies to mitigate these risks, but the ultimate responsibility still lies with the internal IT team to ensure safe browsing practices. By staying informed and cautious, we can enjoy the benefits of AI extensions while minimizing their dangers.
Published by: Martin De Juan