Artificial Intelligence (AI) has become an integral part of our daily lives, from facial recognition systems to digital assistants and online chatbots. However, the rapid advancement of this technology has raised concerns about its potential misuse. One of the most alarming possibilities is the creation of new bioweapons using AI. Bioweapons, toxic substances or organisms designed to cause disease and death, are already banned under the 1925 Geneva Protocol and later international humanitarian law treaties, and their use in war is a war crime. Yet experts fear that AI could be used to design new bioweapons in labs, posing a significant threat to human life.
Prime Minister Rishi Sunak has highlighted the potential danger of chemical and biological weapons developed with AI. Researchers involved in AI-based drug discovery have warned that terrorists could repurpose the technology to design toxic nerve agents. Such molecules could potentially be more lethal than VX, a nerve agent developed at the UK's Defence Science and Technology Lab in the 1950s, which kills by paralysing the muscles, including those needed for breathing.
🤖Robotic warfare revolution in China😳
"Little Whirlwind" is a robot with a self-destruct function, essentially a movable landmine.
With its optic system, it can automatically detect, identify and track enemy vehicles, it then rolls up to the target and detonate its warhead. pic.twitter.com/jld8iPrA36
— Zhao DaShuai 无条件爱国🇨🇳 (@zhao_dashuai) October 19, 2023
Another area of concern is autonomous vehicles. Self-driving cars use cameras and depth-sensing 'LiDAR' units to perceive their surroundings, and software turns those sensor readings into driving decisions. Even minor software errors could therefore lead to catastrophic accidents, such as a car ploughing into pedestrians or running a red light. The self-driving vehicle market is projected to be worth nearly $56 billion in the UK by 2035, but widespread adoption hinges on these vehicles proving safer than human drivers.
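To see how a safety-critical driving decision ultimately comes down to a few lines of software, here is a minimal, purely illustrative sketch of an emergency-braking check driven by LiDAR range readings. It is not any real vehicle's code: the function, thresholds and numbers are all invented for illustration, and production systems are vastly more complex.

```python
def should_emergency_brake(lidar_ranges_m, speed_mps,
                           reaction_time_s=0.5, max_decel_mps2=6.0):
    """Return True if the nearest forward obstacle is within stopping distance.

    lidar_ranges_m: distances in metres to obstacles in the forward arc.
    Stopping distance = reaction distance (v * t) + braking distance (v^2 / 2a).
    """
    if not lidar_ranges_m:
        return False  # no obstacle detected in the forward arc
    nearest = min(lidar_ranges_m)
    stopping_distance = (speed_mps * reaction_time_s
                         + speed_mps ** 2 / (2 * max_decel_mps2))
    return nearest <= stopping_distance

# At 20 m/s (about 45 mph), stopping distance is roughly 43 metres here.
print(should_emergency_brake([50.0, 60.0], 20.0))  # obstacle well clear: False
print(should_emergency_brake([30.0], 20.0))        # obstacle too close: True
```

The point of the sketch is how fragile such logic is: a wrong deceleration constant, a missed sensor reading, or metres confused with feet would silently change the answer, which is exactly the kind of "minor software error" the paragraph above describes.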
AI also poses a potential public health crisis. Without proper regulation, AI tools like ChatGPT could facilitate the spread of health misinformation online. This could exacerbate future public health emergencies, allowing deadly pathogens to spread unchecked and potentially causing more fatalities than Covid-19.
Elon’s reluctance to use Starlink militarily suggests he will not build humanoid robots for the military. Google/Amazon will likely say no too. That gives an opportunity to a robotics startup to completely overhaul and re-arm the US military with millions of robot soldiers. pic.twitter.com/dupjjdAdkB
— Brad (@Brad08414464) October 10, 2023
The unchecked development of drone technology for military applications is another potential hazard. Drones that pair remote control with AI-driven targeting could carry out attacks with little human oversight. The surrender of important decisions to AI-powered software is a growing concern: as humans increasingly delegate critical choices to AI, the risk of catastrophic consequences from poorly programmed or biased algorithms rises.
AI software is already prevalent in society, yet even widely used tools like ChatGPT can produce errors and "hallucinations", which remain among the most pressing problems in AI development. Meanwhile, large AI-driven machines are moving into factories and warehouses, and malfunctions have already had tragic consequences.
Finally, the spectre of killer robots looms large in popular culture, thanks to movies like The Terminator. While this scenario may seem far-fetched, experts warn that it could become a reality if we do not implement robust safeguards. As physicist and AI expert Max Tegmark suggests, survival-of-the-fittest dynamics could apply to AI, leading to the demise of less intelligent species – including humans.
In conclusion, while AI holds immense potential for societal advancement, it also presents significant risks. It is crucial to regulate AI development and use carefully to prevent these worst-case scenarios from becoming reality.