Hello and welcome to Eye on AI.
Today, I’m bringing you an exclusive look at new research showing how generative AI is rapidly inflating the number of fake reviews online, deceiving users into downloading malware-infected apps and deceiving advertisers into placing ads on those same malicious apps.
The fraud analytics team at DoubleVerify—a company that provides tools and research to advertisers, marketplaces, and publishers to help them detect fraud and safeguard their brands—is today releasing research describing how generative AI tools are being used at scale to create fraudulent app reviews faster and more easily than ever before. The researchers tell Eye on AI they found tens of thousands of AI-generated fake reviews propping up thousands of malicious apps across major app stores, including the iOS App Store, the Google Play Store, and app stores on connected TVs.
“[AI] is basically allowing the scale of fake reviews to escalate so much faster,” Gilit Saporta, senior director of fraud analytics at DoubleVerify, told Eye on AI.
Fraudulent reviews are a long-standing issue online, especially on e-commerce platforms such as Amazon. Earlier this month, the FTC finalized rules banning fake reviews and related deceptive practices, such as buying reviews, misrepresenting authentic reviews on a company’s own website, and buying fake social media followers or engagement.
The finalized rules also explicitly ban AI-generated reviews, which have increasingly flooded Amazon, TripAdvisor, and just about anywhere else reviews appear since generative AI tools became readily available, according to DoubleVerify. In its new findings, the company’s fraud researchers describe how generative AI is causing the already prevalent problem to explode in app stores specifically. So far in 2024, the company has identified more than three times as many apps with AI-generated fake reviews as it did in the same period of 2023. Some of the reviews contain obvious phrases that give away their AI origins (“I am a language model”), but others come across as authentic and would be difficult for users to detect, according to Saporta. Only by analyzing reviews at massive scale was her team able to spot subtler signs of AI generation, such as the same phrases repeated across many reviews.
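DoubleVerify hasn’t published its detection methods, but the repeated-phrase signal Saporta describes is simple to approximate. Here’s a minimal, hypothetical Python sketch that counts how often each three-word phrase recurs across an app’s reviews and surfaces phrases shared by suspiciously many reviewers; the tokenization and threshold are illustrative assumptions, not DoubleVerify’s actual approach.

```python
from collections import Counter
import re

def ngrams(text: str, n: int = 3):
    """Yield lowercase word n-grams from a single review."""
    words = re.findall(r"[a-z']+", text.lower())
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def repeated_phrases(reviews: list[str], n: int = 3, min_count: int = 25):
    """Return n-grams that recur across enough distinct reviews to
    suggest templated (possibly AI-generated) text. The min_count
    threshold is an illustrative assumption, not a published rule."""
    counts = Counter()
    for review in reviews:
        # Count each distinct phrase once per review so a single
        # long review can't dominate the tally.
        counts.update(set(ngrams(review, n)))
    return {phrase: c for phrase, c in counts.items() if c >= min_count}

# Usage: phrases shared verbatim by dozens of "different" reviewers
# are a red flag worth manual inspection.
# suspicious = repeated_phrases(all_reviews_for_app)
```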
The malicious apps being legitimized by AI reviews typically install malware on users’ devices to harvest data, or request intrusive permissions, such as permission to run in the background undetected.
Many of the apps “target the most vulnerable parts of our society,” Saporta said, such as seniors (magnifying glass apps, flashlight apps) and kids (apps that promise free coins, gems, and the like in popular kids’ mobile games). Other apps DoubleVerify found to have significant numbers of AI-generated reviews include a Fire TV app called Wotcho TV and two Google Play Store apps, My AI Chatbot and Brain E-Books.
DoubleVerify also found malicious apps that host audio content are leaning heavily on AI-generated reviews. Advertisers pay a premium for audio ads, so this scheme hinges on making the app seem legitimate to both users and advertisers. Once downloaded, these apps install malware that simulates audio playback or plays audio in the background of a user’s device without their knowledge (draining battery and running up data usage), making it possible for the app’s creator to fraudulently charge advertisers for fake listens.
In some cases, the creators of the malicious apps themselves are using tools like ChatGPT to rapidly generate five-star reviews. In others, they’re outsourcing the task to gig economy workers. One telltale sign is a rating distribution of roughly 90% five-star reviews, 10% one-star reviews, and nothing in between.
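That rating-distribution check is easy to automate. Below is a small, hypothetical Python sketch that flags the bimodal pattern described above; the 85%/3% cutoffs loosely mirror the “90% five-star, 10% one-star” pattern and are assumptions, not a published DoubleVerify rule.

```python
from collections import Counter

def is_bimodal_rating_pattern(ratings: list[int],
                              five_star_min: float = 0.85,
                              middle_max: float = 0.03) -> bool:
    """Flag the suspicious 'almost all five-star, the rest one-star,
    nothing in between' pattern. Thresholds are illustrative."""
    if not ratings:
        return False
    counts = Counter(ratings)
    total = len(ratings)
    five_share = counts[5] / total
    middle_share = (counts[2] + counts[3] + counts[4]) / total
    return five_share >= five_star_min and middle_share <= middle_max

# 90 five-star and 10 one-star reviews: worth a closer look.
print(is_bimodal_rating_pattern([5] * 90 + [1] * 10))   # True
# A more natural spread of ratings passes the check.
print(is_bimodal_rating_pattern([5, 4, 3, 5, 4] * 20))  # False
```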
“I think for someone who is not coming in with the knowledge that the app has been showing some suspicious patterns, it would be very difficult to find the reviews that have been produced by AI,” Saporta said, adding that the app stores are aware of the issue and DoubleVerify is working with them to flag problematic apps.
AI companies tout that generative AI models make writing easier, but that ease comes at a cost. The explosion of AI-generated reviews mirrors how AI tools have made it easier for hackers to write more convincing phishing emails faster. Educators say students are outsourcing their writing to ChatGPT, and hiring managers say they’re overwhelmed by floods of low-quality resumes written with AI tools. DoubleVerify is also tracking how malicious actors are using AI to create shell e-commerce websites for companies that don’t really exist.
Technology often aims to lower the barrier to entry, but can it lower it too much?
And with that, here’s more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
AI IN THE NEWS
California lawmakers overwhelmingly approve sweeping AI bill. That’s according to the New York Times. As Jeremy Kahn wrote in Tuesday’s newsletter, the bill—referred to as SB1047—is designed to prevent catastrophic harms caused by AI and has divided the AI community. After being watered down following lobbying from the tech industry, the version of the bill approved yesterday would require AI companies to test the largest AI models for safety prior to release and would allow the state’s attorney general to sue model developers for serious harms caused by their technologies. If signed into law, the bill would likely become de facto regulation for the U.S., which still lacks national AI legislation, and would mark a major shift for AI regulation in the country. For more information on the bill, Fortune’s Jenn Brice has a helpful explainer.
Nvidia surpasses expectations in record Q2 earnings. As Fortune’s Alexei Oreskovic reported, the chipmaker posted $30 billion in revenue for the three months ending July 28, up 122% from a year earlier and far above Wall Street’s already bullish prediction of $28.9 billion. More importantly, Nvidia confirmed customers will begin receiving orders of its next-generation Blackwell AI chip in Q4. Rumors that the chip could be delayed have been swirling, putting extra pressure on yesterday’s earnings.
Uber links up with Wayve for mapless self-driving cars. The companies today announced a partnership and Uber’s strategic investment in Wayve, one of the autonomous driving companies taking a newer “mapless” approach that leans heavily on AI and is designed to allow autonomous vehicles to operate without geofenced limits. With Uber’s support, Wayve intends to accelerate its work with carmakers to enhance consumer vehicles with Level 2+ advanced driver assistance and Level 3 automated driving capabilities. It will also work toward Level 4 autonomous vehicles—referring to the level at which vehicles can fully drive themselves without human intervention in specific conditions—to be deployed by Uber in the future. Uber founder and former CEO Travis Kalanick began publicly speaking about replacing drivers with automated vehicles in 2014, and the company invested more than $1 billion in autonomous tech before ultimately selling off its self-driving unit to Aurora. Last week, I wrote about the role AI plays in self-driving cars (and why self-driving car makers aren’t attaching themselves to the AI hype). You can check out the story here.
New research offers first empirical evidence that LLMs exhibit racialized linguistic stereotypes. Researchers from Stanford University, the University of Oxford, the University of Chicago, and other institutions published a paper in Nature yesterday detailing how AI language models show a particular prejudice against speakers of African American English. Specifically, the researchers describe a discrepancy between what language models overtly say about African Americans and what they covertly associate with them through dialect. They also argue that the current techniques intended to alleviate racial bias in language models (such as human preference alignment) actually exacerbate that discrepancy by obscuring the racism in LLMs. Lastly, they found that the models are more likely to “suggest that speakers of African American English be assigned less prestigious jobs, be convicted of crimes, and be sentenced to death” than speakers of Standard American English. These would be serious judgments to leave in the hands of an LLM and are the type of use cases many lawmakers are looking to regulate. The EU AI Act, for example, names employment and justice processes as “high risk” areas where AI deployment is subject to more stringent guardrails and transparency requirements.
FORTUNE ON AI
Google now uses AI to moderate staff meetings and employees say it asks softball questions —by Marco Quiroz-Gutierrez
Wall Street’s AI darling Super Micro postponed earnings while under short seller’s microscope —by Will Daniel
Meta has abandoned efforts to make custom chips for its upcoming AR glasses —by Kali Hays
Klarna has 1,800 employees it hopes AI will render obsolete —by Ryan Hogg
Why Honeywell has placed such a big bet on gen AI —by John Kell
AI CALENDAR
Sept. 10-11: The AI Conference, San Francisco
Sept. 10-12: AI Hardware and AI Edge Summit, San Jose, Calif.
Sept. 17-19: Dreamforce, San Francisco
Sept. 25-26: Meta Connect, Menlo Park, Calif.
Oct. 22-23: TedAI, San Francisco
Oct. 28-30: Voice & AI, Arlington, Va.
Nov. 19-22: Microsoft Ignite, Chicago, Ill.
Dec. 2-6: AWS re:Invent, Las Vegas, Nev.
Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia
Dec. 9-10: Fortune Brainstorm AI San Francisco (register here)
EYE ON AI NUMBERS
1,914
That’s how many AI companies with B2B business models have scored equity investments so far in 2024 (as of Aug. 9), compared with just 214 deals for AI companies with B2C business models, according to CB Insights.
The figure continues a trend (B2B AI deals have far outpaced B2C deals for at least the past four years, according to the data) and mirrors the strategy the major AI players themselves are taking. For example, while OpenAI’s ChatGPT is about as consumer-facing as AI gets, the company’s investments and acquisitions are primarily geared toward accelerating its enterprise strategy. It makes sense: Enterprises have more access to the resources needed to experiment with AI.
This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.