AI, AGI, ASI. What Could Go Wrong?


AI = Artificial Intelligence, AGI = Artificial General Intelligence, and ASI = Artificial Super Intelligence.
AI is what we have right now, though it is moving quickly toward AGI, where computer-type machines are (or soon will be) smarter than any human ever has been or will be. A few actually believe that AGI may already be here, simply disguising itself while it gathers more data and learns more about humanity. Some have reason to believe that AGI could learn to code itself, which would lead rapidly to ASI. ASI, or Artificial Super Intelligence, is when the digital world becomes fully autonomous, does not need humans at all, and may have little or no use for us whatsoever.

So what could happen? Even if it goes right, millions could still lose their jobs as tasks are automated. But maybe a supercomputer will also find new cures for cancers and answers to global climate issues.

But how could it affect you if AI goes wrong?  

Some of the risks of AI include:

- Automation-spurred job loss: AI-powered automation could displace millions of workers across various industries, especially low-wage and low-skill sectors. This could create unemployment, inequality and social unrest. 

- Privacy violations: AI systems could collect, analyze and exploit personal data without users' consent or awareness, potentially exposing them to identity theft, fraud, discrimination or manipulation.

- Deepfakes: AI systems could generate realistic but fake images, videos, audio or text that could deceive or misinform people, undermine trust in information sources, or harm reputations.

- Algorithmic bias caused by bad data: AI systems could inherit or amplify human biases embedded in the data they are trained on, leading to unfair or inaccurate outcomes or decisions that could affect people's lives, rights or opportunities.

- Socioeconomic inequality: AI systems could widen the gap between the rich and the poor, the educated and the uneducated, or the powerful and the powerless, by creating winners and losers in the digital economy.

- Market volatility: AI systems could disrupt or destabilize markets by creating new competitors, products or services, or by influencing consumer behavior or demand.

- Weapons automation: AI systems could enable the development of autonomous weapons that could operate without human oversight or control, raising ethical, legal and moral issues.

- Existential risk: AI systems could surpass human intelligence and capabilities, posing a threat to human survival or sovereignty.

- Infrastructure disruption: Don’t forget the global power supply, financial crises, interruptions to the food supply chain, poisoning of water and air, medication mislabeling, traffic lights, GPS outages, downed cellular and satellite services, and more. Anything that can be done by a computer can be hacked into by AI.

If you aren’t just a little bit concerned, then you aren’t looking up. The movie “Don’t Look Up” is an allegory for how we have become too distracted to pay attention before it’s too late to change things. Another film, the Russian-made “The Best of Us,” focuses more directly on how AI can go wrong. And of course there is “I, Robot,” based on the stories of Isaac Asimov.

AI is here. It’s only a matter of time before it ends up in the wrong hands.

Source: Conversation with Bing, 5/31/2023
