As Elon Musk puts it, "We are headed towards a situation where AI is vastly smarter than humans and I think that time frame is less than five years from now." The market data points in the same direction: the global AI software market is forecast to grow at a compound annual growth rate of 42.2% from 2020 to 2027, and according to PwC, AI could contribute up to $15.7 trillion to the global economy by 2030, supporting the view that AI is likely to affect every aspect of society. Even though AI is still in its early stages, many people are calling for international treaties and laws to regulate AI research. AI has the potential to reduce human error, support better-informed decisions, and take on tasks that are risky for humans. But what happens if AI falls into the wrong hands and is programmed to do something devastating? How can we prevent an AI that is programmed with a beneficial goal from developing destructive methods for achieving it? And how do we handle the legal challenges AI raises (e.g. who is responsible when an autonomous vehicle causes a crash)?
"I disapprove of what you say, but I will defend to the death your right to say it." (Hall, 1906, 199)