In recent days, some iPhone users have reported that the "Allow Apps to Request to Track" option in the Settings app suddenly became grayed out on their devices. The issue was highlighted by iDeviceHelp on X and in a post on Reddit.
Apple's fine print below the grayed-out toggle reads: "This setting cannot be changed because your profile restricts it, your Apple ID is managed, you do not meet the minimum age requirements, or your age information is missing." However, many affected users say that none of these reasons apply to them.
While some affected users said the problem began after they updated their iPhones to iOS 17.5, which was released earlier this week, others said they were affected on older software versions too, including iOS 17.4.1, iOS 17.4, and iOS 16.6. The root cause of the issue is unclear. We have reached out to Apple for comment.
With multiple iOS versions affected, and some users saying the issue eventually resolved itself, it is likely a server-side problem.
Located in the Settings app under Privacy & Security → Tracking, the "Allow Apps to Request to Track" setting, when turned on, lets apps ask to track your activity across other apps and websites. When the setting is turned off, all new app tracking requests are automatically denied. The setting is part of Apple's App Tracking Transparency feature, which was introduced with iOS 14.5 about three years ago.
Update: Apple says it has fixed an issue that may have briefly disabled the Allow Apps to Request to Track setting for some iPhone users on iOS 14 and later. All affected users will see their previously chosen settings restored in the coming days. Users affected by the bug were defaulted to the more privacy-preserving state.
There are alarming reports on Reddit that Apple's latest iOS 17.5 update has introduced a bug causing old photos that were deleted, in some cases years ago, to reappear in users' photo libraries. After updating their iPhone, one user said they were surprised to find that old NSFW photos they deleted in 2021 suddenly appeared in photos marked as recently uploaded to iCloud. The latest…
The iMessage service that lets Apple users send messages to each other appears to be down for some users, with messages failing to send or taking a long time to go through. There are multiple reports about the issue on social media and a significant spike in outage reports on Down Detector, but Apple's system status page has not yet indicated an outage. Update: Apple's status page says…
Apple today previewed new accessibility features coming with iOS 18 later this year, including some new options for CarPlay. Apple highlighted three new features coming to CarPlay: Voice Control: this feature will let users navigate CarPlay and control apps with just their voice. Color Filters: this feature will make the CarPlay interface visually easier to use…
Today is the official launch day for the new iPad Pro models, and these updated tablets represent the biggest design and feature refresh we have seen for the iPad Pro in several years. We picked up one of the new 13-inch models to check out what's new. When it comes to design, Apple still offers 11-inch and 13-inch options…
The iPhone 16 Pro Max arriving this year is expected to grow in overall size from 6.7 inches to 6.9 inches, and a new image gives us a good idea of how the current iPhone 15 Pro Max compares with what could be Apple's biggest iPhone ever. The image above, shared on X by ZONEofTECH, shows a dummy model of the iPhone 16 Pro Max alongside an actual iPhone 15 Pro Max. Dummy…
In April, Apple updated its guidelines to allow emulators for older game consoles on the App Store, and several popular emulators have already been released. The emulators released so far let iPhone users play games released for older consoles from Nintendo, Sony, SEGA, Atari, and others. Here is a list of some of the popular emulators available on the App Store as of now. Delta: Delta was released…
Windows 10 has received a new optional update, and it comes with some much-needed fixes for problems some users have been experiencing with the search function in the OS.
Microsoft tells us: “This update makes some changes to Windows Search. It is now more reliable, and it is easier to find an app after you install it. This update also gives you a personalized app search experience.”
As Windows Latest describes, for some Windows 10 users, search has become a somewhat hit-or-miss affair, particularly when trying to quickly fire up an app. For example, searching for ‘Recycle Bin’ might return other functions instead of the icon for it.
On social media, there have been a number of reports about wonky search experiences, too, such as this one on Reddit where Windows 10 refused to find a commonly-used app.
In more extreme cases, search is locking up and crashing, which is the pinnacle of irritation for this part of the UI.
Analysis: Wait a little longer
Hopefully, this kind of behavior should be a thing of the past when this update is applied. However, note that this is just an optional update at this point, so it’s officially still in testing – meaning there’s a slight chance the fix may not be fully working. Or that the KB5036979 update might cause unwelcome side-effects elsewhere in Windows 10 (it wouldn’t be the first time, certainly).
The safest bet is to wait it out, let early adopters test this preview update, and install the finished cumulative update when it arrives in May (on Patch Tuesday, which will be May 14).
At least we know this piece of smoothing over is now incoming, so those who’ve been frustrated with iffy search results now know that – with any luck – their woes should soon be over. Or at least, they’ll face spanners in the search works with less regularity.
Elsewhere with this update, Microsoft has also improved the reliability of widgets on the lock screen, with a more “customized experience” and more visuals available, so these should be better all-round, too.
The downside with KB5036979? That’s a new initiative to introduce notifications about your Microsoft Account in the Start menu and Settings app, which will doubtless consist of various prompts to sign up for an account, or to finish that process.
For the first time, an AI system has helped researchers to design completely new antibodies. An algorithm similar to those of the image-generating tools Midjourney and DALL·E has churned out thousands of new antibodies that recognize certain bacterial, viral or cancer-related targets. Although in laboratory tests only about one in 100 designs worked as hoped, biochemist and study co-author Joseph Watson says that “it feels like quite a landmark moment”.
US computer-chip giant Nvidia says that a ‘superchip’ made up of two of its new ‘Blackwell’ graphics processing units and its central processing unit (CPU), offers 30 times better performance for running chatbots such as ChatGPT than its previous ‘Hopper’ chips — while using 25 times less energy. The chip is likely to be so expensive that it “will only be accessible to a select few organisations and countries”, says Sasha Luccioni from the AI company Hugging Face.
A machine-learning tool shows promise for detecting COVID-19 and tuberculosis from a person’s cough. While previous tools used medically annotated data, this model was trained on more than 300 million clips of coughing, breathing and throat clearing from YouTube videos. Although it’s too early to tell whether this will become a commercial product, “there’s an immense potential not only for diagnosis, but also for screening” and monitoring, says laryngologist Yael Bensoussan.
In blind tests, five football experts favoured an AI coach’s corner-kick tactics over existing ones 90% of the time. ‘TacticAI’ was trained on more than 7,000 examples of corner kicks provided by the UK’s Liverpool Football Club. Corner kicks are major opportunities for goals, and strategies for them are determined ahead of matches. “What’s exciting about it from an AI perspective is that football is a very dynamic game with lots of unobserved factors that influence outcomes,” says computer scientist and study co-author Petar Veličković.
AI image generators can amplify biased stereotypes in their output. There have been attempts to quash the problem by manual fine-tuning (which can have unintended consequences, for example generating diverse but historically inaccurate images) and by increasing the amount of training data. “People often claim that scale cancels out noise,” says cognitive scientist Abeba Birhane. “In fact, the good and the bad don’t balance out.” The most important step to understanding how these biases arise and how to avoid them is transparency, researchers say. “If a lot of the data sets are not open source, we don’t even know what problems exist,” says Birhane.
- Some ‘high-risk’ uses of AI, such as in healthcare, education and policing, will be banned by the end of 2024.
- Companies will need to label AI-generated content and will need to notify people when they are interacting with AI systems.
- Citizens can complain when they suspect an AI system has harmed them.
- Some companies, such as those developing general-purpose large language models, will need to become more transparent about their algorithms’ training data.
India has made a U-turn with its AI governance by scrapping an advisory that asked developers to obtain permission before launching certain untested AI models. The government now recommends that AI companies label “the possible inherent fallibility or unreliability of the output generated”.
The African Union has drafted an ambitious AI policy for its 55 member nations, including the establishment of national councils to monitor responsible deployment of the technology. Some African researchers are concerned that this could stifle innovation and leave economies behind. Others say it’s important to think early about protecting people from harm, including exploitation by AI companies. “We must contribute our perspectives and own our regulatory frameworks,” says policy specialist Melody Musoni. “We want to be standard makers, not standard takers.”
In 2017, eight Google researchers created transformers, the neural-network architecture that would become the basis of most AI tools, from ChatGPT to DALL·E. Transformers give AI systems the ‘attention span’ to parse long chunks of text and extract meaning from context. “It was pretty evident to us that transformers could do really magical things,” recalls computer scientist Jakob Uszkoreit who was one of the Google group. Although the work was creating a buzz in the AI community, Google was slow to adopt transformers. “Realistically, we could have had GPT-3 or even 3.5 probably in 2019, maybe 2020,” Uszkoreit says.
Professional Go player Lee Sae Dol remembers being amazed by the AI system AlphaGo’s creative moves when he played against it — and lost — eight years ago. He explains that AlphaGo is now used to uncover new moves and strategies in the ancient strategy game. (Google blog | 3 min read)
In 2022, Pratyusha Ria Kalluri, a graduate student in artificial intelligence (AI) at Stanford University in California, found something alarming in image-generating AI programs. When she prompted a popular tool for ‘a photo of an American man and his house’, it generated an image of a pale-skinned person in front of a large, colonial-style home. When she asked for ‘a photo of an African man and his fancy house’, it produced an image of a dark-skinned person in front of a simple mud house — despite the word ‘fancy’.
After some digging, Kalluri and her colleagues found that images generated by the popular tools Stable Diffusion, released by the firm Stability AI, and DALL·E, from OpenAI, overwhelmingly resorted to common stereotypes, such as associating the word ‘Africa’ with poverty, or ‘poor’ with dark skin tones. The tools they studied even amplified some biases. For example, in images generated from prompts asking for photos of people with certain jobs, the tools portrayed almost all housekeepers as people of colour and all flight attendants as women, and in proportions that are much greater than the demographic reality (see ‘Amplified stereotypes’)1. Other researchers have found similar biases across the board: text-to-image generative AI models often produce images that include biased and stereotypical traits related to gender, skin colour, occupations, nationalities and more.
[Chart: ‘Amplified stereotypes’. Source: Ref. 1]
Perhaps this is unsurprising, given that society is full of such stereotypes. Studies have shown that images used by media outlets2, global health organizations3 and Internet databases such as Wikipedia4 often have biased representations of gender and race. AI models are being trained on online pictures that are not only biased but that also sometimes contain illegal or problematic imagery, such as photographs of child abuse or non-consensual nudity. These pictures shape what the AI creates: in some cases, the images created by image generators are even less diverse than the results of a Google image search, says Kalluri. “I think lots of people should find that very striking and concerning.”
This problem matters, researchers say, because the increasing use of AI to generate images will further exacerbate stereotypes. Although some users are generating AI images for fun, others are using them to populate websites or medical pamphlets. Critics say that this issue should be tackled now, before AI becomes entrenched. Plenty of reports, including the 2022 Recommendation on the Ethics of Artificial Intelligence from the United Nations cultural organization UNESCO, highlight bias as a leading concern.
Some researchers are focused on teaching people how to use these tools better, or on working out ways to improve curation of the training data. But the field is rife with difficulty, including uncertainty about what the ‘right’ outcome should be. The most important step, researchers say, is to open up AI systems so that people can see what’s going on under the hood, where the biases arise and how best to squash them. “We need to push for open sourcing. If a lot of the data sets are not open source, we don’t even know what problems exist,” says Abeba Birhane, a cognitive scientist at the Mozilla Foundation in Dublin.
Make me a picture
Image generators first appeared in 2015, when researchers built alignDRAW, an AI model that could generate blurry images based on text input5. It was trained on a data set containing around 83,000 images with captions. Today, a swathe of image generators of varying abilities are trained on data sets containing billions of images. Most tools are proprietary, and the details of which images are fed into these systems are often kept under wraps, along with exactly how they work.
This image, generated from a prompt for “an African man and his fancy house”, shows some of the typical associations between ‘African’ and ‘poverty’ in many generated images. Credit: P. Kalluri et al., generated using Stable Diffusion XL
In general, these generators learn to connect attributes such as colour, shape or style to various descriptors. When a user enters a prompt, the generator builds new visual depictions on the basis of attributes that are close to those words. The results can be both surprisingly realistic and, often, strangely flawed (hands sometimes have six fingers, for example).
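To make that mechanism concrete, here is a minimal generation sketch, assuming the open-source Hugging Face diffusers library and the publicly released Stable Diffusion weights; the model ID and prompt are illustrative choices, not details taken from the studies discussed in this article.

```python
# A minimal text-to-image sketch using the open `diffusers` library.
# The model ID and prompt are illustrative; the proprietary generators
# discussed here (DALL·E, Midjourney) cannot be inspected this way.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # publicly released weights
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The model maps the prompt's descriptors to learned visual attributes.
# Anything the prompt leaves unspecified (skin tone, gender, setting) is
# filled in from statistical regularities in the training data.
image = pipe("a photo of a doctor").images[0]
image.save("doctor.png")
```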
The captions on these training images — written by humans or automatically generated, either when they are first uploaded to the Internet or when data sets are put together — are crucial to this process. But this information is often incomplete, selective and thus biased itself. A yellow banana, for example, would probably be labelled simply as ‘a banana’, but a description for a pink banana would be likely to include the colour. “The same thing happens with skin colour. White skin is considered the default so it isn’t typically mentioned,” says Kathleen Fraser, an AI research scientist at the National Research Council in Ottawa, Canada. “So the AI models learn, incorrectly in this case, that when we use the phrase ‘skin colour’ in our prompts, we want dark skin colours,” says Fraser.
The difficulty with these AI systems is that they can’t just leave out ambiguous or problematic details in their generated images. “If you ask for a doctor, they can’t leave out the skin tone,” says Kalluri. And if a user asks for a picture of a kind person, the AI system has to visualize that somehow. “How they fill in the blanks leaves a lot of room for bias to creep in,” she says. This is a problem that is unique to image generation — by contrast, an AI text generator could create a language-based description of a doctor without ever mentioning gender or race, for instance; and for a language translator, the input text would be sufficient.
Do it yourself
One commonly proposed approach to generating diverse images is to write better prompts. For instance, a 2022 study found that adding the phrase “if all individuals can be [X], irrespective of gender” to a prompt helps to reduce gender bias in the images produced6.
But this doesn’t always work as intended. A 2023 study by Fraser and her colleagues found that such intervention sometimes exacerbated biases7. Adding the phrase “if all individuals can be felons irrespective of skin colour”, for example, shifted the results from mostly dark-skinned people to all dark-skinned people. Even explicit counter-prompts can have unintended effects: adding the word ‘white’ to a prompt for ‘a poor person’, for example, sometimes resulted in images in which commonly associated features of whiteness, such as blue eyes, were added to dark-skinned faces.
In a Lancet study of global health images, the prompt “Black African doctor is helping poor and sick white children, photojournalism” produced this image, which reproduced the ‘white saviour’ trope the researchers were explicitly trying to counteract. Credit: A. Alenichev et al., generated using Midjourney
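The shape of such an audit is straightforward to sketch, even though measuring the sensitive attribute is the hard part. The sketch below, which reuses the pipe object from the earlier example, generates a batch of images for a base prompt and for an intervention prompt and tallies an attribute for each batch; classify_skin_tone is a hypothetical stand-in for the human raters or trained classifiers the cited studies actually relied on.

```python
# Sketch of a prompt-intervention bias audit. Assumes the `pipe` object
# from the earlier sketch; classify_skin_tone() is a hypothetical
# placeholder for the annotation step used in the cited studies.
from collections import Counter

def audit(pipe, prompt, n=50):
    """Generate n images for one prompt and tally a sensitive attribute."""
    counts = Counter()
    for _ in range(n):
        image = pipe(prompt).images[0]
        counts[classify_skin_tone(image)] += 1  # hypothetical helper
    return counts

base = audit(pipe, "a photo of a felon")
intervened = audit(
    pipe,
    "a photo of a felon, if all individuals can be felons "
    "irrespective of skin colour",
)
# Fraser and colleagues found such interventions can overshoot, e.g.
# shifting results from mostly dark-skinned people to all dark-skinned.
print(base)
print(intervened)
```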
Another common fix is for users to direct results by feeding in a handful of images that are more similar to what they’re looking for. The generative AI program Midjourney, for instance, allows users to add image URLs in the prompt. “But it really feels like every time institutions do this they are really playing whack-a-mole,” says Kalluri. “They are responding to one very specific kind of image that people want to have produced and not really confronting the underlying problem.”
These solutions also unfairly put the onus on the users, says Kalluri, especially those who are under-represented in the data sets. Furthermore, plenty of users might not be thinking about bias, and are unlikely to pay to run multiple queries to get more-diverse imagery. “If you don’t see any diversity in the generated images, there’s no financial incentive to run it again,” says Fraser.
Some companies say they add something to their algorithms to help counteract bias without user intervention: OpenAI, for example, says that DALL·E2 uses a “new technique” to create more diversity from prompts that do not specify race or gender. But it’s unclear how such systems work and they, too, could have unintended impacts. In early February, Google released an image generator that had been tuned to avoid some typical image-generator pitfalls. A media frenzy ensued when user prompts requesting a picture of a ‘1943 German soldier’ created images of Black and Asian Nazis — a diverse but historically inaccurate result. Google acknowledged the mistake and temporarily stopped its generator creating images of people.
Data clean-up
Alongside such efforts lie attempts to improve curation of training data sets, which is time-consuming and expensive for those containing billions of images. That means companies resort to automated filtering mechanisms to remove unwanted data.
However, automated filtering based on keywords doesn’t catch everything. Researchers including Birhane have found, for example, that benign keywords such as ‘daughter’ and ‘nun’ have been used to tag sexually explicit images in some cases, and that images of schoolgirls are sometimes tagged with terms searched for by sexual predators8. And filtering, too, can have unintended effects. For example, automated attempts to clean large, text-based data sets have removed a disproportionate amount of content created by and for individuals from minority groups9. And OpenAI discovered that its broad filters for sexual and violent imagery in DALL·E2 had the unintended effect of creating a bias against the generation of images of women, because women were disproportionately represented in those images.
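A toy version of such a keyword filter, with an invented blocklist and invented captions, makes the failure mode easy to see: the filter only ever inspects caption text, so a harmful image with a benign caption sails through, while an innocuous caption containing a blocked word is dropped.

```python
# Toy caption-keyword filter of the kind used to clean web-scale data
# sets. The blocklist and example captions are invented for illustration.
BLOCKLIST = {"explicit", "nsfw"}

def keep_caption(caption: str) -> bool:
    """Return True if the caption passes the keyword filter."""
    return not (set(caption.lower().split()) & BLOCKLIST)

# A harmful image with a benign caption is kept, because the filter
# never looks at the image itself:
print(keep_caption("a photo of a daughter"))   # True: kept
print(keep_caption("explicit beach photo"))    # False: dropped
```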
The best curation “requires human involvement”, says Birhane. But that’s slow and expensive, and looking at many such images takes a deep emotional toll, as she well knows. “Sometimes it just gets too much.”
Independent evaluations of the curation process are impeded by the fact that these data sets are often proprietary. To help overcome this problem, LAION, a non-profit organization in Hamburg, Germany, has created publicly available machine-learning models and data sets that link to images and their captions, in an attempt to replicate what goes on behind the closed doors of AI companies. “What they are doing by putting together the LAION data sets is giving us a glimpse into what data sets inside big corporations and companies like OpenAI look like,” says Birhane. Although intended for research use, these data sets have been used to train models such as Stable Diffusion.
Researchers have learnt from interrogating LAION data that bigger isn’t always better. AI researchers often assume that the bigger the training data set, the more likely that biases will disappear, says Birhane. “People often claim that scale cancels out noise,” she says. “In fact, the good and the bad don’t balance out.” In a 2023 study, Birhane and her team compared the data set LAION-400M, which has 400 million image links, with LAION-2B-en, which has 2 billion, and found that hate content in the captions increased by around 12% in the larger data set10, probably because more low-quality data had slipped through.
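The shape of that comparison is simple to sketch: sample captions from each data set, score them with a hate-speech classifier, and compare the rates. In the sketch below, load_captions and is_hateful are hypothetical placeholders for the data-set tooling and classifier the study actually used.

```python
# Sketch of a scale-versus-quality audit in the spirit of Birhane et al.
# load_captions() and is_hateful() are hypothetical placeholders for the
# data-set tooling and hate-speech classifier used in the actual study.
import random

def hate_rate(captions, classifier, sample_size=100_000, seed=0):
    """Estimate the fraction of hateful captions from a random sample."""
    random.seed(seed)
    sample = random.sample(captions, min(sample_size, len(captions)))
    return sum(classifier(c) for c in sample) / len(sample)

rate_400m = hate_rate(load_captions("LAION-400M"), is_hateful)
rate_2b = hate_rate(load_captions("LAION-2B-en"), is_hateful)

# The 2023 study reported roughly 12% more hate content in the larger set.
print(f"relative increase: {(rate_2b - rate_400m) / rate_400m:.1%}")
```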
An investigation by another group found that the LAION-5B data set contained child sexual abuse material. Following this, LAION took down the data sets. A spokesperson for LAION told Nature that it is working with the UK charity Internet Watch Foundation and the Canadian Centre for Child Protection in Winnipeg to identify and remove links to illegal materials before it republishes the data sets.
Open or shut
If LAION is bearing the brunt of some bad press, that’s perhaps because it’s one of the few open data sources. “We still don’t know a lot about the data sets that are created within these corporate companies,” says Will Orr, who studies cultural practices of data production at the University of Southern California in Los Angeles. “They say that it’s to do with this being proprietary knowledge, but it’s also a way to distance themselves from accountability.”
In response to Nature’s questions about which measures are in place to remove harmful or biased content from DALL·E’s training data set, OpenAI pointed to publicly available reports that outline its work to reduce gender and racial bias, without providing exact details on how that’s accomplished. Stability AI and Midjourney did not respond to Nature’s e-mails.
Orr interviewed some data set creators from technology companies, universities and non-profit organizations, including LAION, to understand their motivations and the constraints. “Some of these creators had feelings that they were not able to present all the limitations of the data sets,” he says, because that might be perceived as critical weaknesses that undermine the value of their work.
Specialists feel that the field still lacks standardized practices for annotating their work, which would help to make it more open to scrutiny and investigation. “The machine-learning community has not historically had a culture of adequate documentation or logging,” says Deborah Raji, a Mozilla Foundation fellow and computer scientist at the University of California, Berkeley. In 2018, AI ethics researcher Timnit Gebru — a strong proponent of responsible AI and co-founder of the community group Black in AI — and her team released a datasheet to standardize the documentation process for machine-learning data sets11. The datasheet has more than 50 questions to guide documentation about the content, collection process, filtering, intended uses and more.
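As a flavour of what such documentation looks like, here is a minimal, hypothetical excerpt of a datasheet keyed to the categories named above; the answers are invented examples, not taken from any real data set.

```python
# Hypothetical excerpt of a machine-learning datasheet, using categories
# from Gebru et al.'s proposal. The answers are invented examples.
datasheet = {
    "content": "image-caption pairs scraped from the public web",
    "collection_process": "links drawn from a public web crawl",
    "filtering": "keyword blocklist for explicit terms; no human review",
    "intended_uses": "research on text-to-image generation only",
}
for field, answer in datasheet.items():
    print(f"{field}: {answer}")
```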
The datasheet “was a really critical intervention”, says Raji. Although many academics are increasingly adopting such documentation practices, there’s no incentive for companies to be open about their data sets. Only regulations can mandate this, says Birhane.
One example is the European Union’s AI Act, which was endorsed by the European Parliament on 13 March. Once it becomes law, it will require that developers of high-risk AI systems provide technical documentation, including datasheets describing the training data and techniques, as well as details about the expected output quality and potential discriminatory impacts, among other information. But which models will fall under the high-risk classification remains unclear. Once enacted, the act will be the first comprehensive regulation for AI technology and will shape how other countries think about AI laws.
Specialists such as Birhane, Fraser and others think that explicit and well-informed regulations will push companies to be more cognizant of how they build and release AI tools. “A lot of the policy focus for image-generation work has been oriented around minimizing misinformation, misrepresentation and fraud through the use of these images, and there has been very little, if any, focus on bias, functionality or performance,” says Raji.
Even with a focus on bias, however, there’s still the question of what the ideal output of AI should be, researchers say — a social question with no simple answer. “There is not necessarily agreement on what the so-called right answer should look like,” says Fraser. Do we want our AI systems to reflect reality, even if the reality is unfair? Or should it represent characteristics such as gender and race in an even-handed, 50:50 way? “Someone has to decide what that distribution should be,” she says.
The repair website iFixit today shared a video teardown of the base model 13-inch MacBook Air with the M3 chip and 256GB of storage, and it shows that this configuration is equipped with two 128GB flash storage chips. This change results in significantly faster SSD speeds compared to the equivalent MacBook Air with the M2 chip, which has a single 256GB storage chip, as the SSD can read and write from the two chips simultaneously.
YouTube channel Max Tech ran Blackmagic’s Disk Speed Test tool with a 5GB file size test on both the M2 and M3 models of the 13-inch MacBook Air with 256GB of storage and 8GB of RAM, and found that the SSD in the M3 model achieved up to 33% faster write speeds and up to 82% faster read speeds than the SSD in the M2 model.
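You can approximate this kind of benchmark yourself. The sketch below times a sequential 5GB write and read in the spirit of Blackmagic’s tool; it is only a rough approximation, since real benchmarks bypass the operating system’s page cache and this sketch does not, so its read figure will be optimistic.

```python
# Rough sequential-throughput test in the spirit of Blackmagic's Disk
# Speed Test: time a 5GB write, then a read of the same file. Unlike a
# real benchmark, this does not bypass the OS page cache.
import os
import time

PATH = "speedtest.bin"           # scratch file in the current directory
CHUNK = 8 * 1024 * 1024          # 8 MiB per write
TOTAL = 5 * 1024 * 1024 * 1024   # 5 GiB, matching the article's test size

buf = os.urandom(CHUNK)

start = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(TOTAL // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())         # ensure data actually reaches the disk
write_secs = time.perf_counter() - start

start = time.perf_counter()
with open(PATH, "rb") as f:
    while f.read(CHUNK):
        pass
read_secs = time.perf_counter() - start

os.remove(PATH)
print(f"write: {TOTAL / write_secs / 1e6:.0f} MB/s")
print(f"read:  {TOTAL / read_secs / 1e6:.0f} MB/s")
```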
Apple’s decision to switch to a single 256GB chip for the base model M2 MacBook Air was controversial, even though the slower SSD speeds are unlikely to be noticed by the average user doing common day-to-day tasks. Fortunately, the base model M3 MacBook Air’s SSD speeds are roughly equivalent to those of the base model M1 MacBook Air again, so customers no longer need to be concerned about this potential limitation.
Apple still sells a 13-inch M2 MacBook Air with 256GB of storage for $999, so customers who want maximum SSD performance should avoid that model.
Beyond this SSD-related change, the teardown shows that the M3 MacBook Air models have a virtually identical internal design to the M2 models. The video provides a look at the battery cells with adhesive pull tabs, the logic board, the trackpad, and more.
While the iPhone 16 Pro and iPhone 16 Pro Max are still around six months away from launching, there are already many rumors about the devices. Below, we have recapped new features and changes expected so far. These are some of the key changes rumored for the iPhone 16 Pro models as of March 2024: Larger displays: The iPhone 16 Pro and iPhone 16 Pro Max will be equipped with larger 6.3-inch…
Apple appears to be internally testing iOS 17.4.1 for the iPhone, based on evidence of the software update in our website’s logs this week. Our logs have revealed the existence of several iOS 17 versions before Apple released them, ranging from iOS 17.0.3 to iOS 17.3.1. iOS 17.4.1 should be a minor update that addresses software bugs and/or security vulnerabilities. It is unclear when…
Resale value trends suggest the iPhone SE 4 may not hold its value as well as Apple’s flagship models, according to SellCell. According to the report, Apple’s iPhone SE models have historically depreciated much more rapidly than the company’s more premium offerings. The third-generation iPhone SE, which launched in March 2022, experienced a significant drop in resale value, losing 42.6%…
Apple’s next-generation iPad Pro models are expected to be announced in a matter of weeks, so what can customers expect from the highly anticipated new machines? The 2022 iPad Pro was a minor update that added the M2 chip, Apple Pencil hover, and specification upgrades like Wi-Fi 6E and Bluetooth 5.3 connectivity. The iPad Pro as a whole has generally only seen relatively small updates in…
iOS 17.4 was released last week following over a month of beta testing, and the update includes many new features and changes for the iPhone. iOS 17.4 introduces major changes to the App Store, Safari, and Apple Pay in the EU, in response to the Digital Markets Act. Other new features include Apple Podcasts transcripts, an iMessage security upgrade, new emoji options, and more. Below, we…
Apple plans to release new iPad Pro and iPad Air models “around the end of March or in April,” according to Bloomberg’s Mark Gurman. He also expects new Magic Keyboard and Apple Pencil accessories for iPads to launch simultaneously. Apple is expected to release a larger 12.9-inch iPad Air. In his Power On newsletter on Sunday, Gurman reiterated that Apple is preparing a special build of the…
Apple today announced three further changes for developers in the European Union, allowing them to distribute apps directly from webpages, choose how to design in-app promotions, and more. Apple last week enabled alternative app stores in the EU in iOS 17.4, allowing third-party app stores to offer a catalog of other developers’ apps as well as the marketplace developer’s own apps. As of…
Earlier this week, Apple announced new 13-inch and 15-inch MacBook Air models, the first Mac updates of the year featuring M3 series chips. But there are other Macs in Apple’s lineup still to be updated to the latest M3 processors. So, where do the Mac mini, Mac Studio, and Mac Pro fit into Apple’s M3 roadmap for the year ahead? Here’s what the latest rumors say. Mac mini: Apple announced …