OpenAI’s recent upgrade to ChatGPT’s image generation has significantly improved the model’s ability to render legible text inside images, and users have quickly put that ability to work producing highly realistic fake receipts, raising concerns about its potential use in fraud.
In March 2025, OpenAI rolled out native image generation powered by its GPT-4o model in ChatGPT, sharply improving the chatbot’s handling of embedded text. Users quickly discovered that the feature could be used to create convincing counterfeit receipts. Venture capitalist Deedy Das, for instance, shared on social media a fabricated receipt for a real San Francisco steakhouse generated with GPT-4o, noting how easily such images can be produced and warning that they threaten verification processes that rely on image-based proof.
Other users reproduced similar results, adding touches like food or drink stains to heighten the sense of authenticity. A LinkedIn user in France posted a crumpled AI-generated receipt for a local restaurant chain, demonstrating how convincingly the model can mimic real documents.
TechCrunch tested GPT-4o and successfully generated a fake receipt for an Applebee’s in San Francisco. The image contained telltale errors, however: the total used a comma where a period belongs, and the numbers did not add up. Such discrepancies show that, advanced as the technology is, its output can still betray its artificial origin.
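That arithmetic slip points to one of the few machine-checkable signals a generated receipt leaves behind. Below is a minimal sketch of the kind of consistency check an expense-review pipeline could run, assuming the receipt’s fields have already been extracted (by OCR, say); the function name and sample values are hypothetical, not part of any real system.

```python
from decimal import Decimal

def receipt_totals_consistent(
    line_items: list[tuple[str, Decimal]],
    tax: Decimal,
    printed_total: Decimal,
) -> bool:
    """Return True when the line items plus tax equal the printed total.

    A mismatch, like the one in the TechCrunch test image, is a strong
    hint that the receipt was generated rather than printed.
    """
    subtotal = sum((price for _, price in line_items), Decimal("0"))
    return subtotal + tax == printed_total

# Hypothetical values, as if extracted from a generated receipt:
items = [("Ribeye", Decimal("42.00")), ("Old Fashioned", Decimal("16.00"))]
print(receipt_totals_consistent(items, tax=Decimal("5.08"),
                                printed_total=Decimal("64.99")))  # False: 63.08 != 64.99
```

A production check would tolerate rounding and regional tax quirks rather than demand exact equality; the strict comparison here only keeps the sketch short.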
The ability to easily create fake receipts raises significant concerns about potential fraud. Individuals could exploit this technology to fabricate expenses and seek reimbursements for non-existent costs. OpenAI has acknowledged these concerns and stated that all images generated by ChatGPT include metadata indicating their origin. The company also monitors usage to enforce its policies against fraudulent activities.
OpenAI’s spokesperson, Taya Christianson, emphasized the company’s commitment to providing users with creative freedom while preventing misuse. She noted that AI-generated receipts could serve legitimate purposes, such as educational tools for financial literacy or creating original art and product advertisements.
The emergence of AI-generated fake receipts underscores the broader issue of AI’s role in producing deceptive content. In the past, AI systems have been used to create deepfake images and videos, leading to misinformation and fraud. To combat this, companies like OpenAI have implemented measures such as watermarking AI-generated images to indicate their origin. For example, images produced by OpenAI’s DALL-E 3 now contain a watermark from the Coalition for Content Provenance and Authenticity (C2PA), which includes both a visible signature and invisible metadata. This approach aims to help verify the authenticity of digital content and mitigate the spread of misinformation.
Despite these efforts, challenges remain. Watermarks can be removed, accidentally or deliberately, and many social media platforms strip metadata from uploaded images, so the absence of metadata does not prove an image is genuine. OpenAI acknowledges that metadata is not a foolproof solution, but argues that adopting provenance standards, and encouraging users to look for these signals, is a crucial step toward more trustworthy digital information.
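For anyone building on these signals, real verification should go through the official C2PA tooling (such as the open-source c2patool), which validates cryptographic signatures. Purely as a hedged illustration, the sketch below scans a file for the JUMBF and C2PA byte markers that C2PA manifests are typically embedded with; it validates nothing on its own, and, per the caveat above, a negative result says nothing about whether an image is authentic.

```python
def has_c2pa_markers(path: str) -> bool:
    """Heuristic only: look for byte strings that C2PA manifests
    (stored in JUMBF boxes inside JPEG/PNG containers) usually carry.

    This does not validate signatures; use the C2PA SDKs or the
    c2patool CLI for real verification. Absence of markers proves
    nothing, since platforms routinely strip metadata on upload.
    """
    with open(path, "rb") as f:
        data = f.read()
    return b"jumb" in data and b"c2pa" in data

# Hypothetical usage on a downloaded receipt image:
# print(has_c2pa_markers("receipt.png"))
```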
AI systems that can generate realistic images with embedded text present both opportunities and risks. They open up creative and practical applications, but they also demand robust safeguards against fraudulent misuse. As the technology continues to evolve, developers, users, and policymakers will need to collaborate on the ethical guidelines and technical countermeasures these challenges require.