When most of us think about artificial intelligence (AI), we tend to think of chatbots like Grok, ChatGPT, and Claude. But those chatbots are merely interfaces between us humans and the real machinery underneath: the engines powering AI are large language models and other computer models that can "learn" and apply new information in a variety of contexts. AI is far more than just a fancier calculator.
An enormously important issue in AI ethics is the "alignment problem," which boils down to the question: what values are the AI algorithms prioritizing? Who chooses what counts as a "best" or "better" outcome for the AI? "Fair" means something different to everyone. Computer programs can be designed to be extremely, even relentlessly, efficient at optimizing outcomes according to the priorities they are given. That is not malice by some superintelligence; it is just a computer program doing what it was designed to do, with an efficiency unimaginable in a human.
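The point can be made concrete with a toy sketch. The scenario, the action names, and the numbers below are all invented for illustration; the only thing being demonstrated is that an optimizer maximizes exactly the objective it is handed, and anything left out of that objective is simply invisible to it.

```python
# Hypothetical illustration of the alignment problem: the optimizer has no
# values of its own -- it only sees the weights it is given.

actions = {
    # action: (profit, harm) -- invented illustrative values
    "raise_prices_sharply": (9.0, 8.0),
    "raise_prices_modestly": (6.0, 2.0),
    "hold_prices_steady": (3.0, 0.0),
}

def best_action(weights):
    """Return the action with the highest weighted score.

    `weights` maps an attribute name ("profit", "harm") to its priority.
    Any attribute missing from `weights` contributes nothing to the score,
    so the optimizer behaves as if that attribute did not exist.
    """
    def score(item):
        profit, harm = item[1]
        return (weights.get("profit", 0.0) * profit
                + weights.get("harm", 0.0) * harm)
    return max(actions.items(), key=score)[0]

# An objective that only mentions profit: harm is invisible, so the most
# harmful action wins because it is also the most profitable.
print(best_action({"profit": 1.0}))

# An objective that also penalizes harm: a different action wins.
print(best_action({"profit": 1.0, "harm": -1.0}))
```

Nothing in the program is malicious; the first call picks the harmful option only because harm was never part of its objective.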
But suppose a media-linked AI were tasked with optimizing oil prices without being told to avoid warfare. I imagine the headlines the AI pushed to the top of news feeds could influence decision-makers to the point that we end up shooting missiles at Iran or some other oil-producing country. (The recipients of those bombs would not care that they were targeted as the result of an algorithm.)
What if an AI were used to enforce secrecy at the expense of other possible priorities? Do we want a world where computer programs keep everyone from communicating with each other about important things because we might inadvertently say something that was at some point labeled "classified" or "intellectual property"?
We see in the headlines that AI is a bubble possibly headed for a bust. That doesn't mean AI is nothing to take seriously, though. The dot-com bubble burst, and tech companies didn't disappear; instead, the big companies survived and absorbed the market share of the smaller companies that didn't. If an AI bubble bursts, we can expect even greater dominance in the field by companies that already hold a great deal of power over commerce, access, public opinion, knowledge, and sentiment. That is definitely something to take seriously!
AI chatbots, as many have noticed, tend to give "in-the-box" solutions and often outright "slop." Monopolies on the information we get from AI-driven search engines and chatbots will simultaneously exploit human creativity and diminish the financial rewards flowing to the authentic creators. Such monopolies will also tend to suppress solutions that don't generate profits for the monopoly-holding entities, again not out of malice, but because the companies that develop these systems and set their priorities will want to maximize profits.