Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war
Artificial intelligence (AI) is broadly defined as a machine's ability to perform tasks such as computing, analyzing, reasoning, learning, and discovering meaning. Its development and application are advancing rapidly, both in ‘narrow AI’, where only a limited and focused set of tasks is performed, and in ‘broad’ or ‘broader’ AI, where multiple functions and different tasks are performed. Recent advances in so-called large language models, the type of AI system used by ChatGPT and other chatbots, have raised fears that AI could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs. Some believe AI could become powerful enough to create societal-scale disruptions within a few years if nothing is done to slow it down.
The existential threat posed by artificial intelligence is a serious concern that has been raised by many experts in the field. The possibility that AI could one day surpass human intelligence and capabilities has led to fears that it could pose an existential risk to humanity. There are a number of potential scenarios that could lead to an existential catastrophe caused by AI. One possibility is that AI could develop a goal that is incompatible with human survival, such as maximizing its own intelligence or power. If AI were to pursue such a goal, it could potentially take actions that would lead to the extinction of humanity.
Another possibility is that AI could develop a capability that allows it to control or manipulate the physical world in ways that are harmful to humans. For example, AI could develop the ability to control nuclear weapons or other destructive technologies. If AI were to acquire this capability, it could potentially use it to destroy humanity. It is also possible that AI could simply make mistakes that have catastrophic consequences for humanity. For example, AI could accidentally release a deadly virus or cause a global financial collapse. While these scenarios may seem far-fetched, it is important to remember that AI is a rapidly developing technology, and it is impossible to predict what the future holds.
It is important to note that not everyone agrees that AI poses an existential threat. Some experts believe that AI is unlikely to become so intelligent that it poses a danger to humanity. They argue that AI will always be limited by the data it is trained on and that it will never be able to truly understand the world in the way that humans do. However, even if the existential threat posed by AI is relatively low, it is still a risk that we should take seriously. We need to start thinking about how to mitigate the risks of AI and ensure that it is used for good rather than for evil. This means developing international agreements on the responsible development and use of AI, as well as investing in research into AI safety.
Some skeptics argue that AI technology is still too immature to pose an existential threat. When it comes to today's AI systems, they worry more about short-term problems, such as biased and incorrect responses, than about longer-term dangers. Others counter that AI is improving so rapidly that it has already surpassed human-level performance in some areas and will soon surpass it in others. They say the technology has shown signs of advanced abilities and understanding, giving rise to fears that “artificial general intelligence”, or AGI, a type of artificial intelligence that can match or exceed human-level performance at a wide variety of tasks, may not be far off.
It is also important that we not only target our concerns at AI, but also at the actors who are driving the development of AI too quickly or too recklessly, and at those who seek only to deploy AI for self-interest or malign purposes. If AI is to ever fulfil its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions, and dilute power so that there are effective checks and balances. This includes ensuring transparency and accountability of the parts of the military–corporate industrial complex driving AI developments and the social media companies that are enabling AI-driven, targeted misinformation to undermine our democratic institutions and rights to privacy.
Mitigating the risks of AI requires a multi-faceted approach that encompasses technical, ethical, and regulatory measures. From a technical standpoint, robust testing and validation protocols should be implemented to identify and address biases, vulnerabilities, and potential unintended consequences in AI systems. Transparent and interpretable algorithms can facilitate understanding and accountability. Ethical guidelines and standards must be established to ensure AI is developed and deployed responsibly, prioritizing human values, privacy, and fairness. Additionally, robust regulatory frameworks are needed to govern AI development and deployment, including regular audits, transparency requirements, and legal accountability for AI-related decisions. Collaboration among researchers, industry leaders, policymakers, and the public is vital to foster ongoing dialogue and address emerging risks promptly, promoting the responsible and beneficial use of AI technologies.
Finally, given that the world of work and employment will change drastically over the coming decades, we should deploy our clinical and public health expertise in evidence-based advocacy for a fundamental and radical rethink of social and economic policy, so that future generations can thrive in a world in which human labor is no longer a central or necessary component of the production of goods and services. Though the future of AI is uncertain, it is important to remember that we have the power to shape its development. By taking the existential threat posed by AI seriously and taking steps to mitigate the risks, we can ensure that AI is a force for good in the world.

About The Author
Malaika Sarwar is a student in the school of Politics and International Relations at Quaid-I-Azam University, Islamabad. She can be reached at sarwarmalaika18@gmail.com
The views expressed in this article are solely those of the original author and do not necessarily reflect or represent the views of Rationale-47.
