Google is advising third-party Android app developers to use GenAI features responsibly.
The new guidelines from the search and advertising behemoth are an attempt to tackle harmful content, such as sexual content and hate speech, generated with such technologies.
To that end, apps that use AI to generate content must avoid creating Restricted Content, provide a way for users to report or flag offensive material, and market themselves in a way that accurately represents their capabilities. App developers are also advised to thoroughly test their AI models to ensure user safety and privacy.
"Be sure to test your apps across various user scenarios and safeguard them against prompts that could manipulate your generative AI feature to create harmful or offensive content," warned Prabhat Sharma, director of trust and safety for Google Play, Android, and Chrome.
The development follows earlier reporting from 404 Media, which found multiple apps on the Apple App Store and Google Play Store that advertised the ability to create non-consensual nude images.
Meta's Use of Public Data for AI Raises Concerns
The growing use of AI technology in recent years has also heightened privacy and security concerns around training data and model safety, as malicious actors can extract sensitive information from models or tamper with them to produce unexpected results.
Furthermore, Meta's decision to use publicly available information across its products and services to improve its AI offerings and build the "world's best recommendation technology" has prompted the Austrian privacy organization noyb to file complaints in 11 European countries alleging violations of the region's GDPR privacy laws.
"This information includes things like public posts, public photos, and their captions," the company stated late last month. "In the future, we may also use the information people share when interacting with our generative AI features, like Meta AI, or with a business, to develop and improve our AI products."
Noyb has accused Meta of transferring the burden onto users (i.e., making it opt-out rather than opt-in) and failing to offer enough information about how the firm intends to handle consumer data.
Meta, for its part, has stated that it will be "relying on the legal basis of 'Legitimate Interests' for processing certain first and third-party data in the European Region and the United Kingdom" to develop AI and build better experiences. European Union users have until June 26 to opt out of the processing, which they can do by submitting a request.
While the social media behemoth emphasized that its approach is consistent with how other digital companies in Europe are developing and improving their AI products, the Norwegian data protection regulator Datatilsynet said it was "doubtful" that the process is legal.
"In our view, the most natural thing would have been to ask users for consent before their posts and photos are used in this way," the agency said in a statement.
"The European Court of Justice has already ruled that Meta cannot claim a 'legitimate interest' to bypass users' data protection rights for advertising purposes," said Max Schrems from noyb. "However, the company is attempting to use the same justification for the training of unspecified 'AI technology.'"
Microsoft's Recall Faces More Scrutiny
Meta's latest regulatory snafu comes at a time when Microsoft's own AI-powered feature, Recall, has received widespread criticism for the privacy and security risks that could arise as a result of capturing screenshots of users' Windows PC activities every five seconds and turning them into a searchable database.
In a new analysis, security researcher Kevin Beaumont found that a bad actor could deploy an information stealer and exfiltrate the database that holds the information extracted from the screenshots. The only caveat to pulling this off is that accessing the data requires administrator rights on the user's machine.
"Recall enables threat actors to automate scraping everything you've ever looked at within seconds," Beaumont stated. "Microsoft should recall Recall and rebuild it so that it becomes the feature it deserves, to be supplied at a later date."
Other researchers have built tools such as TotalRecall that demonstrate how readily Recall can be exploited to extract highly sensitive information from the database. "Windows Recall keeps all your data in an unencrypted SQLite database on your computer, and the screenshots are just saved in a folder," explained Alexander Hagenah, the developer of TotalRecall.
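To illustrate why an unencrypted SQLite store is so easy to mine, the minimal sketch below opens a Recall-style database with Python's standard library and lists its tables; no special tooling or decryption step is needed. The database filename (`ukg.db`) follows Hagenah's TotalRecall write-up, but the exact path and table names on a given machine are assumptions here, not a definitive layout.

```python
import sqlite3

def list_recall_tables(db_path: str) -> list[str]:
    """Open an unencrypted SQLite database and return its table names.

    Because Recall's store is plain SQLite (per Hagenah's analysis),
    any process that can read the file can enumerate and dump it with
    the standard sqlite3 module alone.
    """
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'"
        ).fetchall()
        return [name for (name,) in rows]
    finally:
        conn.close()
```

On a real system the file would live somewhere under the user's `AppData` directory (the precise location is an assumption); once located, dumping any table's contents is a single `SELECT *` away, which is exactly the exposure Beaumont and Hagenah describe.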
As of June 6, 2024, TotalRecall has been updated so that it no longer requires admin permissions, using one of two methods that security researcher James Forshaw disclosed for bypassing the administrator privilege requirement to access the Recall data.
"It's only protected through being [access control list]'ed to SYSTEM and so any privilege escalation (or non-security boundary *cough*) is sufficient to leak the information," Forshaw stated.
The two methods involve impersonating a program called AIXHost.exe and obtaining its token, or, better yet, using the current user's privileges to modify the access control lists and gain access to the entire database.
However, it's worth noting that Recall is currently in preview, and Microsoft can still modify the software before it becomes widely available to all users later this month. It is slated to be enabled by default on compatible Copilot+ PCs.