As Elon Musk puts it, "We are headed towards a situation where AI is vastly smarter than humans and I think that time frame is less than five years from now." Indeed, the global AI software market is forecast to grow at a compound annual growth rate of 42.2% from 2020 to 2027, and according to PwC, AI could contribute up to $15.7 trillion to the global economy by 2030, supporting the thesis that AI is likely to affect every aspect of society. While AI is still in its early stages, many people highlight the need for international treaties and laws to regulate AI research. AI has the potential to reduce human error, support better-informed decisions and take on risks in place of humans. However, what happens if AI falls into the wrong hands and is programmed to do something devastating? How can we stop an AI that is programmed to do something beneficial from developing destructive methods for achieving that goal? And how do we deal with the legal challenges AI raises (e.g. who is responsible for a crash caused by an autonomous vehicle)?
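To give a sense of how fast 42.2% compounds, here is a minimal back-of-the-envelope sketch in Python; the $10B base figure is a made-up placeholder for illustration, not a number from the forecast:

# Back-of-the-envelope: what a 42.2% CAGR means over 2020-2027.
# NOTE: the base market size is a placeholder; only the growth rate
# and time span come from the forecast cited above.

cagr = 0.422                      # compound annual growth rate
years = 7                         # 2020 -> 2027
base = 10.0                       # placeholder market size, in $B

factor = (1 + cagr) ** years      # total multiplier over the period
print(f"Growth factor: {factor:.1f}x")               # ~11.8x
print(f"Placeholder market: ${base * factor:.0f}B")  # ~$118B by 2027

In other words, the cited rate implies the market multiplying nearly twelvefold over those seven years, which is what "grow rapidly" means in concrete terms.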
Russia's cyber operations against other countries (Estonia, France, the USA, Germany, etc.) have revealed new battlegrounds. They have disrupted elections and influenced public opinion, causing internal struggles. This is particularly problematic for democracies, because they are built on the idea that people are unbiased and can make rational decisions.
Regarding legal issues, the Council of Europe emphasises the dangers of AI. For them, large-scale data gathering is concerning and poses risks to our freedoms. That does not stop states from mass-collecting information on their citizens (e.g. through facial recognition).
Furthermore, I don't think there is such a thing as "good hands."
AI is a political and speculative tool that rules over an opaque world, hidden behind falsely good intentions. Legally, it is difficult to impose a framework on software and AI because the technology is continually evolving and the knowledge is freely available online. In my view, as long as states maintain the status quo and we keep a critical mind, we will not become puppets.
This question is fundamental because it makes us think about progress in general: could we, and should we, stop progress? As always, it is frightening. People have constantly been afraid of disruptive innovations that would change their habits (or the way the world works).
Nonetheless, I think that AI should be regulated. While technophiles would point out that we were (wrongly) reluctant to embrace cellphones, WiFi, 4G, etc., I would argue that the more we evolve, the more powerful our inventions and techniques become; the growth is exponential. The destructive potential of AI is multilayered and tremendously high.
In addition, if AI is not regulated, it will allow a tiny fraction of the population (namely Zuckerberg, Musk and Bezos) to access data that would grant them far too much power. As you said, what happens if AI falls into the wrong hands? Right now, those "hands" belong to five or six people who happen to be the wealthiest on the planet; it sounds like an apocalyptic movie scenario, right?
The state has always tried to regulate monopolies. Until now, however, they were only addressed in the financial and economic realm (Standard Oil, worth roughly $1 trillion in today's dollars, was dismantled). Maybe we should think about a way to regulate data, and who controls it, in order to protect the clueless majority from the almighty minority.