To the average consumer, artificial intelligence is still largely a curious novelty. But just as gunpowder, first used in innocuous fireworks, eventually found its way onto the battlefield, artificial intelligence seems poised to find application in armed conflict, at least according to Elon Musk. The tech entrepreneur recently took to Twitter with grave warnings that AI, if left unchecked, could spawn armed conflicts.
Musk’s comments came after Russian President Vladimir Putin predicted that whichever country first succeeds in developing AI would become the “ruler of the world.” Though most media attention has focused on American companies, Russia has its own host of tech startups fueling advances in AI.
The popularity of Kaggle, an AI development platform recently acquired by Google, offers one way to gauge the level of AI research happening within the country. By that metric, Russia ranks fourth in the world for Kaggle users, behind only the United States, China and India but ahead of the United Kingdom. Current Russian AI projects range from the innocuous, such as automated court stenography through speech recognition, to the military-focused, including AI-equipped drones and fighter jets.
Putin made his comments to students at a career guidance seminar. The Russian president went on to say that he hopes no single country monopolizes AI technology, and that while Russia intends to be an industry leader, it would be willing to share its advancements with other governments.
Elon Musk painted a grimmer picture than Putin’s vision of shared technology. The SpaceX and Tesla CEO cautioned that AI could be responsible for starting the next world war if its decision-making capabilities advance to the point that national armed forces co-opt it for strategic planning. Musk singled out China and Russia but further cautioned that any country with a significant computer science program could pose a threat. In his proposed scenario, an AI responsible for missile launches could decide that a preemptive nuclear strike against a rival nation is the best course of action, starting a war without any input from military or government officials.
This isn’t the first time Musk has cautioned against AI. His rhetoric about AI development sometimes borders on the alarmist; he once called the technology a threat to the very “existence of human civilization.” That said, given how young AI technology is and how unclear its potential remains, some level of caution is sensible.