Here’s why AI bias may not be a threat.


Amazon and Google came to rule their respective markets, but there will probably be many competing AI systems.
OpenAI, which built the ChatGPT language model, is the biggest and best-funded AI platform company, with over $10 billion in funding and a valuation of nearly $30 billion. Microsoft has partnered with OpenAI, but Google, Meta, Apple, and Amazon all have their own AI systems, and Silicon Valley is home to hundreds of other AI startups. Will market forces turn one of these into a monopoly?

When Google entered the search business, there were already a dozen other search engines, including Yahoo, AltaVista, Excite, and InfoSeek. Many people asked why we needed yet another one. But Google came to dominate online search because it worked so well and benefited from powerful network effects.

Network effects are very strong, and Amazon, which holds nearly 60% of the online shopping market, is a prime example. Buyers go there to find the most products from the most sellers, and sellers go there to reach the most buyers. Even if another site offered a better experience, these network effects would still make Amazon hard to beat.
Google introduced a new way to rank pages that users instantly recognized as better than what had come before. Users switched to Google because its index of websites was growing quickly and its search results were more accurate, and advertisers flocked there to reach that enormous audience.
The early search engines didn’t last long. The same thing happened in social media: as people left platforms like Friendster and MySpace for Facebook, it became a near-monopoly driven by the same network effects, because it was the largest platform and the easiest place to find one’s friends online.


AI platforms don’t seem to enjoy the same network effects as search and social media platforms. They are more like content producers such as the New York Times or Fox News: rather than simply distributing third-party content, they gather and analyze existing information and use it to create new content of their own.

Training data is the information AI platforms draw on as source material; it comes from online news sites and social media platforms like Twitter. If an AI platform uses only right-leaning news sites as training data, the content it generates will have a right-leaning bias. In the same way, if an AI platform relies mostly on left-leaning news sources, the material it creates will tilt to the left.
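A toy example can make the mechanics of that claim concrete. The sketch below is not how ChatGPT or any real AI platform works; it is a minimal Python bigram model trained on a small, hypothetical one-sided corpus, and it shows that a text generator can only recombine whatever slant is present in its training data.

```python
# Minimal sketch: a tiny bigram "language model" trained on a deliberately
# one-sided, hand-written corpus. Its output can only echo that corpus,
# which is the article's point about biased source selection.
import random
from collections import defaultdict

# Hypothetical training corpus drawn from a single viewpoint only.
biased_corpus = [
    "the new policy is a disaster for taxpayers",
    "the new policy is a handout that taxpayers will regret",
    "critics say the policy is a disaster and a handout",
]

# Build a bigram table: each word maps to the words that followed it.
transitions = defaultdict(list)
for sentence in biased_corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)

def generate(seed_word: str, length: int = 8) -> str:
    """Sample a short continuation; it can only recombine the biased corpus."""
    words = [seed_word]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

random.seed(0)
print(generate("the"))  # e.g. "the new policy is a disaster for taxpayers"
```

Swap in a corpus with the opposite slant and the generated text tilts the other way; the model itself has no view, it simply reflects its sources.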

AI platforms pose censorship problems only if they hide the biases in the data used to train their models or if they place strict limits on the content those models can generate. For example, a platform could allow certain types of content about leaders of one political party but not about leaders of the other.

Science fiction films like “The Matrix” and “Blade Runner” imagine AI-generated content that reflects only the prevailing consensus or the government’s narrative. This is why it is so important to require AI platforms to be transparent by publishing the specific sources and sites that make up their training data.


Just as there is no reason for the New York Times, Fox News, and other news outlets to consolidate around a single dominant source, the many AI platforms that operate as content producers are unlikely to be swallowed up by one big company.

And if AI companies act as authors rather than as providers of “interactive computer services,” as Section 230 terms them, they do not receive the liability protections that the law grants to such providers.

Alfred Abaah