Massive Data Exposure in Lovable AI App Builder: Thousands of Projects Compromised
A significant security vulnerability has been identified in Lovable, a widely used AI-powered application development platform. The flaw has reportedly exposed sensitive data from thousands of projects created before November 2025, including source code, database credentials, AI chat histories, and real customer information.
Understanding the Vulnerability
The core issue lies in a Broken Object Level Authorization (BOLA) vulnerability within Lovable’s API. BOLA occurs when an API checks that a caller is logged in but fails to verify that the caller is actually permitted to access the specific object being requested, allowing users to retrieve data they shouldn’t have access to. In Lovable’s case, any user with a free-tier account can reportedly make API calls to the platform’s backend that return project data belonging to other users: the calls themselves are tied to an account, but the affected endpoint never checks whether the caller owns the requested project.
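To make the failure mode concrete, here is a minimal sketch in Python. The data model, function names, and project IDs are entirely hypothetical and are not Lovable’s actual backend code; the point is only to contrast a handler that skips the ownership check with one that enforces it.

```python
# Hypothetical in-memory "backend" -- illustrative only.
PROJECTS = {
    "proj-1": {"owner": "alice", "messages": ["alice's private chat history"]},
    "proj-2": {"owner": "bob", "messages": ["bob's credentials and prompts"]},
}

def get_project_messages_vulnerable(project_id, requesting_user):
    """BOLA flaw: any authenticated caller can read any project's data,
    because ownership of the object is never checked."""
    project = PROJECTS.get(project_id)
    return project["messages"] if project else None

def get_project_messages_fixed(project_id, requesting_user):
    """Object-level authorization: verify the caller owns the project
    before returning its data."""
    project = PROJECTS.get(project_id)
    if project is None or project["owner"] != requesting_user:
        raise PermissionError("caller is not authorized for this project")
    return project["messages"]
```

In the vulnerable version, `requesting_user` is accepted but ignored, which is exactly the pattern BOLA describes: authentication without per-object authorization.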
This type of vulnerability is particularly concerning because it is both prevalent and easy to exploit. The Open Worldwide Application Security Project (OWASP) ranks BOLA as the number-one risk (API1) in its API Security Top 10, highlighting the critical need for proper authorization checks in API design.
Discovery and Disclosure
The vulnerability was brought to light by a security researcher known as @weezerOSINT. They discovered that the API endpoint `https://api.lovable.dev/GetProjectMessagesOutputBody` returns full project message histories, AI reasoning logs, and tool-use records without enforcing proper access controls. The exposed data includes user IDs, session content, and internal AI reasoning chains that were never intended for public access.
The researcher reported the issue to Lovable via the HackerOne bug bounty platform approximately 48 days before publicly disclosing it. Lovable has reportedly applied a fix for newly created projects, but the flaw remains unpatched for projects created before November 2025, leaving a significant risk for users who built applications on the platform before that cutoff.
Scope of the Exposure
The extent of the data exposure is alarming. One affected project belongs to Connected Women in AI, a nonprofit organization, and reportedly contains exposed Supabase database credentials alongside real user data. Among the data found were records linked to individuals from Accenture Denmark and Copenhagen Business School.
Beyond the nonprofit sector, employees at major technology firms, including Nvidia, Microsoft, Uber, and Spotify, reportedly hold Lovable accounts tied to affected projects, raising the possibility that sensitive corporate development data is also at risk.
Implications for Users
For users who created projects on Lovable before November 2025, the implications are serious. The exposed data could be exploited in various ways, including:
– Intellectual Property Theft: Source code and proprietary algorithms could be stolen and used by competitors or malicious actors.
– Data Breaches: Exposed database credentials could allow unauthorized access to user data, leading to potential data breaches.
– Reputational Damage: Organizations could suffer reputational harm if sensitive customer information is leaked.
– Regulatory Penalties: Depending on the nature of the exposed data, organizations could face penalties for failing to protect user information.
Recommended Actions
Given the severity of the situation, affected users should take immediate action:
1. Rotate Credentials: Immediately change any API keys, database credentials, or other secrets stored within projects created before November 2025.
2. Review Project Data: Assume that chat histories, source code, and other sensitive information associated with older projects may have been accessed. Review and secure this data accordingly.
3. Monitor for Unauthorized Access: Keep an eye on your systems for any signs of unauthorized access or unusual activity.
4. Contact Lovable Support: Reach out to Lovable’s support team for guidance on securing your projects and to inquire about any forthcoming patches or fixes.
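As part of reviewing project data (step 2 above), it can help to sweep project files and exported chat logs for embedded secrets before assuming they are safe. The sketch below is a deliberately simplified scanner; the regex patterns are illustrative assumptions, and dedicated tools such as gitleaks or trufflehog cover far more credential formats.

```python
import re

# Illustrative patterns only -- real secret scanners use much larger rule sets.
SECRET_PATTERNS = {
    "supabase_url": re.compile(r"https://[a-z0-9]+\.supabase\.co"),
    "jwt_like_token": re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"),
    "generic_api_key": re.compile(r"""(?i)(api[_-]?key|secret)\s*[:=]\s*["'][^"']{16,}["']"""),
}

def scan_text_for_secrets(text):
    """Return a list of (pattern_name, matched_string) pairs found in text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Running this over source files, `.env` files, and exported chat histories gives a quick first pass at which credentials need rotating; anything it flags should be rotated per step 1.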
Broader Implications for AI Development Platforms
This incident underscores a recurring challenge in AI-native development platforms: security controls often lag behind rapid feature deployment, leaving early adopters most exposed. Organizations building production applications on low-code AI builders should enforce secrets management practices independent of the platform and regularly audit API exposure for any sensitive credentials embedded in project repositories or chat histories.
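One concrete way to keep secrets management independent of the platform is to read credentials from the environment at runtime rather than embedding them in source files or AI chat prompts, so that a platform-side leak of code or chat history does not expose live credentials. A minimal sketch (the function name and variable names are illustrative assumptions):

```python
import os

def get_required_secret(name):
    """Fetch a secret from the environment at runtime.

    Because the value never appears in source code or chat history,
    a leak of either does not expose the live credential."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"required secret {name!r} is not set")
    return value
```

In deployment, the value would be injected by the hosting environment or a dedicated secrets manager; rotating it then requires no code change at all.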
As AI development platforms continue to grow in popularity, it’s crucial for both providers and users to prioritize security. Implementing robust access controls, regularly auditing for vulnerabilities, and promptly addressing reported issues are essential steps in protecting sensitive data and maintaining user trust.
Conclusion
The Lovable AI app builder’s data exposure serves as a stark reminder of the importance of security in the development and deployment of AI applications. Users must remain vigilant, proactively securing their projects and staying informed about potential vulnerabilities. Meanwhile, platform providers must prioritize security at every stage of development to prevent such incidents and protect their users’ data.