
Apple Updates App Store Guidelines to Permit Game Emulators, Website Links in EU Music Apps


Apple today updated its App Store guidelines to comply with an anti-steering mandate levied by the European Commission. Music streaming apps like Spotify are now permitted to include a link or buy button that leads to a website with information about alternative music purchasing options, though this is only permitted in the European Economic Area.


Music Streaming Services Entitlements: music streaming apps in specific regions can use Music Streaming Services Entitlements to include a link (which may take the form of a buy button) to the developer’s website that informs users of other ways to purchase digital music content or services. These entitlements also permit music streaming app developers to invite users to provide their email address for the express purpose of sending them a link to the developer’s website to purchase digital music content or services. Learn more about these entitlements.

In accordance with the entitlement agreements, the link may inform users about where and how to purchase those in-app purchase items, and the price of such items. The entitlements are limited to use only in the iOS or iPadOS App Store in specific storefronts. In all other storefronts, streaming music apps and their metadata may not include buttons, external links, or other calls to action that direct customers to purchasing mechanisms other than in-app purchase.

The European Commission in March fined Apple $2 billion for anti-competitive conduct against rival music streaming services. The fine also came with a requirement that Apple “remove the anti-steering provisions” from its App Store rules, which Apple has now done. Apple is restricted from repeating the infringement or adopting similar practices in the future, though it is worth noting that Apple plans to appeal the decision.

Apple has accused Spotify of manipulating the European Commission to get the rules of the ‌App Store‌ rewritten in its favor. “They want to use Apple’s tools and technologies, distribute on the ‌App Store‌, and benefit from the trust we’ve built with users – and pay Apple nothing for it,” Apple complained following the ruling.

In addition to updating its streaming music rules, Apple today also added games from retro game console emulator apps to the list of software permitted under guideline 4.7. Guideline 4.7 permits apps to offer HTML5 mini apps and mini games, streaming games, chatbots, game emulators, and plug-ins.

Apps may offer certain software that is not embedded in the binary, specifically HTML5 mini apps and mini games, streaming games, chatbots, and plug-ins. Additionally, retro game console emulator apps can offer to download games. You are responsible for all such software offered in your app, including ensuring that such software complies with these Guidelines and all applicable laws.

Game emulators have managed to sneak onto the ‌App Store‌ several times over the years by using hidden functionality, but Apple has not explicitly permitted them until now. The rule change that allows for game emulators is worldwide, as is support for apps that offer mini apps and mini games.


The White House lays out extensive AI guidelines for the federal government


It’s been five months since President Joe Biden signed an executive order (EO) to address the rapid advancements in artificial intelligence. The White House is today taking another step forward in implementing the EO with a policy that aims to regulate the federal government’s use of AI. Safeguards that the agencies must have in place include, among other things, ways to mitigate the risk of algorithmic bias.
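The policy does not prescribe a specific bias metric, but to make the idea concrete, here is a minimal Python sketch of one check an agency might run as part of such a safeguard: comparing approval rates across demographic groups and flagging large gaps for human review. The data, the metric and the 0.8 threshold are illustrative assumptions, not requirements taken from the OMB policy.

```python
# Illustrative sketch only: one simple check an agency might run as part of a
# bias-mitigation safeguard. The data and the 0.8 threshold are hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group approval rate to the highest (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(sample)
    ratio = disparate_impact_ratio(rates)
    print(rates, ratio)
    if ratio < 0.8:  # hypothetical review threshold
        print("Flag for human review: large gap in outcomes between groups.")
```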

“I believe that all leaders from government, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its benefits,” Vice President Kamala Harris told reporters on a press call.

Harris announced three binding requirements under a new Office of Management and Budget (OMB) policy. First, agencies will need to ensure that any AI tools they use “do not endanger the rights and safety of the American people.” They have until December 1 to put “concrete safeguards” in place so that the AI systems they’re employing don’t impact Americans’ safety or rights. Otherwise, an agency will have to stop using an AI product unless its leaders can justify that scrapping the system would have an “unacceptable” impact on critical operations.

Impact on Americans’ rights and safety

Per the policy, an AI system is deemed to impact safety if it “is used or expected to be used, in real-world conditions, to control or significantly influence the outcomes of” certain activities and decisions. Those include maintaining election integrity and voting infrastructure; controlling critical safety functions of infrastructure like water systems, emergency services and electrical grids; operating autonomous vehicles; and operating the physical movements of robots in “a workplace, school, housing, transportation, medical or law enforcement setting.”

Unless they have appropriate safeguards in place or can otherwise justify their use, agencies will also have to ditch AI systems that infringe on the rights of Americans. Purposes the policy presumes to impact rights include predictive policing; social media monitoring for law enforcement; detecting plagiarism in schools; blocking or limiting protected speech; detecting or measuring human emotions and thoughts; pre-employment screening; and “replicating a person’s likeness or voice without express consent.”

When it comes to generative AI, the policy stipulates that agencies should assess its potential benefits. They will also need to “establish adequate safeguards and oversight mechanisms that allow generative AI to be used in the agency without posing undue risk.”

Transparency requirements

The second requirement will force agencies to be transparent about the AI systems they’re using. “Today, President Biden and I are requiring that every year, US government agencies publish online a list of their AI systems, an assessment of the risks those systems might pose and how those risks are being managed,” Harris said.

As part of this effort, agencies will need to publish government-owned AI code, models and data, as long as doing so won’t harm the public or government operations. If an agency can’t disclose specific AI use cases for sensitivity reasons, it will still have to report metrics.

Vice President Kamala Harris delivers remarks during a campaign event with President Joe Biden in Raleigh, N.C., Tuesday, March 26, 2024. (AP Photo/Stephanie Scarbrough)

Last but not least, federal agencies will need to have internal oversight of their AI use. That includes each department appointing a chief AI officer to oversee all of an agency’s use of AI. “This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” Harris noted. Many agencies will also need to have AI governance boards in place by May 27.

The vice president added that prominent figures from the public and private sectors (including civil rights leaders and computer scientists) helped shape the policy along with business leaders and legal scholars.

The OMB suggests that, by adopting the safeguards, the Transportation Security Administration may have to let airline travelers opt out of facial recognition scans without losing their place in line or facing a delay. It also suggests that there should be human oversight over things like AI fraud detection and diagnostics decisions in the federal healthcare system.

As you might imagine, government agencies are already using AI systems in a variety of ways. The National Oceanic and Atmospheric Administration is working on artificial intelligence models to help it more accurately forecast extreme weather, floods and wildfires, while the Federal Aviation Administration is using a system to help manage air traffic in major metropolitan areas to improve travel time.

“AI presents not only risk, but also a tremendous opportunity to improve public services and make progress on societal challenges like addressing climate change, improving public health and advancing equitable economic opportunity,” OMB Director Shalanda Young told reporters. “When used and overseen responsibly, AI can help agencies to reduce wait times for critical government services to improve accuracy and expand access to essential public services.”

This policy is the latest in a string of efforts to regulate the fast-evolving realm of AI. While the European Union has passed a sweeping set of rules for AI use in the bloc, and federal bills are in the pipeline, efforts to regulate AI in the US have taken more of a patchwork approach at the state level. This month, Utah enacted a law to protect consumers from AI fraud. In Tennessee, the Ensuring Likeness Voice and Image Security Act (aka the Elvis Act — seriously) is an attempt to protect musicians from deepfakes, i.e. having their voices cloned without permission.


Google’s new bulk sender guidelines spell trouble for B2B


Back in October, Google and Yahoo unveiled a pivotal update to their bulk sender guidelines.

Taking effect February 1, the new rules, which affect both bulk senders (those sending over 5,000 emails per day to Gmail accounts) and ordinary Gmail users, introduce email authentication requirements and set a spam-complaint threshold of 0.3%.
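In practical terms, the authentication side of the rules centers on having SPF, DKIM and DMARC configured for the sending domain, while the 0.3% figure refers to the share of delivered messages that recipients mark as spam. The Python sketch below is a minimal illustration of tracking that rate against the threshold; the numbers are hypothetical, and in practice Gmail reports the real figure through Postmaster Tools rather than through a calculation like this.

```python
# Hedged illustration: tracking a spam-complaint rate against the 0.3% threshold
# described in the guidelines. Inputs are hypothetical example values.

SPAM_COMPLAINT_THRESHOLD = 0.003  # 0.3%, per the October announcement

def complaint_rate(delivered: int, complaints: int) -> float:
    """Fraction of delivered messages that recipients marked as spam."""
    if delivered == 0:
        return 0.0
    return complaints / delivered

def over_threshold(delivered: int, complaints: int) -> bool:
    return complaint_rate(delivered, complaints) > SPAM_COMPLAINT_THRESHOLD

if __name__ == "__main__":
    # Example: 12,000 messages delivered to Gmail in a day, 45 marked as spam.
    rate = complaint_rate(12_000, 45)
    print(f"Complaint rate: {rate:.4%}")  # 0.3750%
    print("Over 0.3% threshold:", over_threshold(12_000, 45))
```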


Could AI-designed proteins be weaponized? Scientists lay out safety guidelines


AlphaFold structure prediction for probable disease resistance protein At1g58602. The artificial-intelligence tool AlphaFold can design proteins to perform specific functions. Credit: Google DeepMind/EMBL-EBI (CC-BY-4.0)

Could proteins designed by artificial intelligence (AI) ever be used as bioweapons? In the hope of heading off this possibility — as well as the prospect of burdensome government regulation — researchers today launched an initiative calling for the safe and ethical use of protein design.

“The potential benefits of protein design [AI] far exceed the dangers at this point,” says David Baker, a computational biophysicist at the University of Washington in Seattle, who is part of the voluntary initiative. Dozens of other scientists applying AI to biological design have signed the initiative’s list of commitments.

“It’s a good start. I’ll be signing it,” says Mark Dybul, a global health policy specialist at Georgetown University in Washington DC who led a 2023 report on AI and biosecurity for the think tank Helena in Los Angeles, California. But he also thinks that “we need government action and rules, and not just voluntary guidance”.

The initiative comes on the heels of reports from US Congress, think tanks and other organizations exploring the possibility that AI tools — ranging from protein-structure prediction networks such as AlphaFold to large language models such as the one that powers ChatGPT — could make it easier to develop biological weapons, including new toxins or highly transmissible viruses.

Designer-protein dangers

Researchers, including Baker and his colleagues, have been trying to design and make new proteins for decades. But their capacity to do so has exploded in recent years thanks to advances in AI. Endeavours that once took years or were impossible — such as designing a protein that binds to a specified molecule — can now be achieved in minutes. Most of the AI tools that scientists have developed to enable this are freely available.

To take stock of the potential for malevolent use of designer proteins, Baker’s Institute for Protein Design at the University of Washington hosted an AI safety summit in October 2023. “The question was: how, if in any way, should protein design be regulated and what, if any, are the dangers?” says Baker.

The initiative that he and dozens of other scientists in the United States, Europe and Asia are rolling out today calls on the biodesign community to police itself. This includes regularly reviewing the capabilities of AI tools and monitoring research practices. Baker would like to see his field establish an expert committee to review software before it is made widely available and to recommend ‘guardrails’ if necessary.

The initiative also calls for improved screening of DNA synthesis, a key step in translating AI-designed proteins into actual molecules. Currently, many companies providing this service are signed up to an industry group, the International Gene Synthesis Consortium (IGSC), that requires them to screen orders to identify harmful molecules such as toxins or pathogens.
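To give a rough sense of what that screening involves, here is a deliberately simplified Python sketch that checks an incoming synthesis order against a small denylist of sequences of concern. Every sequence in it is a made-up placeholder, and real screening pipelines rely on curated hazard databases and similarity search rather than the exact substring matching shown here.

```python
# Toy sketch of DNA-synthesis order screening. The "denylist" entries and the
# order sequence are fabricated placeholders; production screening compares
# orders against curated databases of regulated pathogen and toxin sequences,
# typically using alignment or similarity search rather than exact matching.

DENYLIST = {
    "hypothetical_toxin_fragment": "ATGGCTAGCTTTAAACCCGGG",
    "hypothetical_pathogen_marker": "TTTTACGACGTAGCATCGAT",
}

def screen_order(order_seq: str) -> list[str]:
    """Return the names of denylisted fragments found in the ordered sequence."""
    order_seq = order_seq.upper()
    return [name for name, frag in DENYLIST.items() if frag in order_seq]

if __name__ == "__main__":
    order = "CCCC" + DENYLIST["hypothetical_toxin_fragment"] + "GGGG"
    hits = screen_order(order)
    if hits:
        print("Order flagged for manual biosecurity review:", hits)
    else:
        print("No denylist matches; order proceeds to normal processing.")
```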

“The best way of defending against AI-generated threats is to have AI models that can detect those threats,” says James Diggans, head of biosecurity at Twist Bioscience, a DNA-synthesis company in South San Francisco, California, and chair of the IGSC.

Risk assessment

Governments are also grappling with the biosecurity risks posed by AI. In October 2023, US President Joe Biden signed an executive order calling for an assessment of such risks and raising the possibility of requiring DNA-synthesis screening for federally funded research.

Baker hopes that government regulation isn’t in the field’s future — he says it could limit the development of drugs, vaccines and materials that AI-designed proteins might yield. Diggans adds that it’s unclear how protein-design tools could be regulated, because of the rapid pace of development. “It’s hard to imagine regulation that would be appropriate one week and still be appropriate the next.”

But David Relman, a microbiologist at Stanford University in California, says that scientist-led efforts are not sufficient to ensure the safe use of AI. “Natural scientists alone cannot represent the interests of the larger public.”
