Book & Author
Kissinger, Schmidt, & Huttenlocher: The Age of AI – And Our Human Future
By Dr Ahmed S. Khan
Chicago, IL

 

Emerging technologies of the Fourth Industrial Revolution (4IR) are dramatically changing society and the ways we live, work, interact with one another, and educate our students. These changes are enabled by emerging technologies such as Artificial Intelligence (AI), Big Data, the Internet of Things (IoT), Augmented Reality, Blockchain, Robotics, Drones, Nanotechnology, Genomics and Gene Editing, Quantum Computing, and Smart Manufacturing. The interplay of these technologies is affecting every sector across the globe at unprecedented speed, and the time needed to remake the world keeps shrinking, now to less than a year, in contrast to previous industrial revolutions: (1) steam- and water-powered mechanization (centuries), (2) mass production and electrical power (multiple decades), and (3) electronics and IT (decades).

Among these emerging technologies, Artificial Intelligence (AI) is becoming the most transformative technology in the history of humankind. Present and future stakeholders, policy shapers, and decision makers need to be educated not only about AI's technical capabilities but also about its social and ethical implications and its intended and unintended consequences, so that they can guide society toward appropriate applications, alert it to failures, and provide a vision for addressing the associated challenges and issues in a wise and humane manner.

The recent popularity of OpenAI’s chatbot ChatGPT, built on the GPT-4 model, has led to a rush of commercial investment in new generative AI tools, trained on large pools of data, that can produce human-like text, exquisite images, melodic music, and functional computer code. However, ethical and societal concerns have been raised about bias in how these algorithms are formulated, which can lead to false outputs and discriminatory outcomes.

The unregulated use of AI has already arrived on the battlefield. The Economist, in its April 11, 2024 story “Israel’s use of AI in Gaza is coming under closer scrutiny: Do the humans in Israel’s army have sufficient control over its technology?”, observes: “FOR OVER a decade military experts, lawyers and ethicists have grappled with the question of how to control lethal autonomous weapon systems, sometimes pejoratively called killer robots. One answer was to keep a ‘man in the loop’—to ensure that a human always approved each decision to use lethal force…That nightmarish vision of war with humans ostensibly in control but shorn of real understanding of their actions, killing in rote fashion, seems to have come to pass in Gaza…The Israel Defence Forces (IDF) have reportedly developed artificial-intelligence (AI) tools known as ‘The Gospel’ and ‘Lavender’…” These tools have reportedly been used to select targets in Gaza, where more than 30,000 people have been killed.

There have been calls for disclosure laws to force AI providers to open up their systems for third-party scrutiny. In the EU, the Artificial Intelligence (AI) Act aims to strengthen rules around data quality, transparency, human oversight, and accountability. On March 13, 2024, the European Parliament approved the world’s first comprehensive set of regulations to govern Artificial Intelligence. First proposed in 2021, the EU AI Act classifies AI technology into categories of risk, ranging from “unacceptable” (requiring a ban) to high, medium, and low levels of threat. The regulation is expected to enter into force in May 2024.

In the United States, the Biden administration, in a February 2023 executive order, directed federal agencies to root out bias in their design and use of new technologies, including AI, and to protect the American public from algorithmic discrimination and AI-related harm. In May 2023, the administration announced a new initiative focused on three pathways: (1) new investments to power responsible American AI research and development (R&D); (2) public assessments of existing generative AI systems; and (3) policies to ensure the US government leads by example in mitigating AI risks and harnessing AI opportunities. On October 30, 2023, President Biden issued an executive order to establish new standards for AI safety and security; protect Americans’ privacy; advance equity and civil rights; stand up for consumers and workers; promote innovation and competition; and advance American leadership around the world.

In The Age of AI: And Our Human Future, authors Kissinger, Schmidt, and Huttenlocher, three accomplished thinkers, come together to explain how Artificial Intelligence is transforming human society with respect to security, economics, order, reality, and knowledge, and to discuss its risks and benefits. In its seven chapters (1. Where We Are; 2. How We Got Here: Technology and Human Thought; 3. From Turing to Today—and Beyond; 4. Global Network Platforms; 5. Security and World Order; 6. AI and Human Identity; and 7. AI and the Future), the book addresses key questions: What do AI-enabled innovations in health, biology, space, and quantum physics look like? What do AI-enabled “best friends” look like, especially to children? What does AI-enabled war look like? Does AI perceive aspects of reality humans do not? When AI participates in assessing and shaping human action, how will humans change? And what, then, will it mean to be human?

Henry A. Kissinger (May 27, 1923 – November 29, 2023), recipient of the Nobel Peace Prize (1973), served as the 56th Secretary of State (1973-1977) and as Assistant to the President for National Security Affairs (1969-1975). He also served as chairman of his consulting firm, Kissinger Associates. Kissinger was a prolific writer; his major works include A World Restored: Metternich, Castlereagh and the Problems of Peace, 1812-22; Nuclear Weapons and Foreign Policy; The Necessity for Choice: Prospects of American Foreign Policy; White House Years; Years of Upheaval; Diplomacy; Years of Renewal; Does America Need a Foreign Policy? Toward a Diplomacy for the 21st Century; Ending the Vietnam War: A History of America's Involvement in and Extrication from the Vietnam War; Crisis: The Anatomy of Two Major Foreign Policy Crises; On China; and World Order.

Eric Schmidt, a technologist, entrepreneur, and philanthropist, joined Google in 2001 and helped it grow into a global technology leader. He is the co-author of Trillion Dollar Coach: The Leadership Playbook of Silicon Valley's Bill Campbell; How Google Works; and The New Digital Age: Transforming Nations, Businesses, and Our Lives.

Daniel Huttenlocher is the inaugural dean of the MIT Schwarzman College of Computing. His academic and industrial experience includes serving as a computer science faculty member at Cornell and MIT, as a researcher and manager at the Xerox Palo Alto Research Center (PARC), and as CTO of a fintech start-up. He currently chairs the board of the John D. and Catherine T. MacArthur Foundation and serves on the boards of Amazon and Corning.

Describing the objective of the book, the authors state: “This book is about a class of technology that augurs a revolution in human affairs. AI—machines that can perform tasks that require human-level intelligence—has rapidly become a reality….Computer scientists and engineers have developed technologies, particularly machine-learning methods using ‘deep neural networks,’ capable of producing insights and innovations that have long eluded human thinkers and of generating text, images, and video that appear to have been created by humans….Accordingly, humanity is developing a new and exceedingly powerful mechanism for exploring and organizing reality—one that remains, in many respects, inscrutable to us.”

“At every turn,” the authors note, “humanity will have three primary options: confining AI, partnering with it, or deferring to it. These choices will define AI's application to specific tasks or domains, reflecting philosophical as well as practical dimensions.…AI will transform our approach to what we know, how we know, and even what is knowable.”

Reflecting on the scope of human-machine partnerships, the authors state: “The AI era will elevate a concept of knowledge that is the result of partnership between humans and machines. Together, we (humans) will create and run (computer) algorithms that will examine more data more quickly, more systematically, and with a different logic than any human mind can. Sometimes, the result will be the revelation of properties of the world that were beyond our conception—until we cooperated with machines. AI already transcends human perception—in a sense, through chronological compression or ‘time travel’: enabled by algorithms and computing power, it analyzes and learns through processes that would take human minds decades or even centuries to complete.”

Among the risks posed by AI, the authors point to AI’s possible role in escalating military conflict between nations: “AI increases the inherent risk of preemption and premature use escalating into conflict. A country fearing that its adversary is developing automatic capabilities may seek to preempt it….To prevent unintended escalation, major powers should pursue their competition within a framework of verifiable limits. Negotiation should not only focus on moderating an arms race but also making sure that both sides know, in general terms, what the other is doing….There will never be complete trust. But as nuclear arms negotiations during the Cold War demonstrated, that does not mean that no measure of understanding can be achieved….Defining the nature and manner of restraint on AI-enabled weapons, and ensuring restraint is mutual, will be critical.”

Explaining the impact of AI on security and world order, the authors observe: “The will to achieve mutual restraint on the most destructive capabilities must not wait for tragedy to arise. As humanity sets out to compete in the creation of new, evolving, and intelligent weapons, history will not forgive a failure to attempt to set limits. In the era of artificial intelligence, the enduring quest for national advantage must be informed by an ethic of human preservation….Collective action will be hard, and at times impossible, to achieve, but individual actions, with no common ethic to guide them, will only magnify instability. Those who design, train, and partner with AI will be able to achieve objectives on a scale and level of complexity that, until now, have eluded humanity—new scientific breakthroughs, new economic efficiencies, new forms of security, and new dimensions of social monitoring and control.”

The authors advocate forming a group composed of respected figures from the highest levels of government, business, and academe to serve two functions: “1. Nationally, it should ensure that the country remains intellectually and strategically competitive in AI, 2. Both nationally and globally, it should study, and raise awareness of, the cultural implications AI produces….Technology, strategy, and philosophy need to be brought into some alignment, lest one outstrip the others. What about traditional society should we guard? And what about traditional society should we risk in order to achieve a superior one?”

The authors conclude by observing: “The advent of AI, with its capacity to learn and process information in ways that human reason alone cannot, may yield progress on questions that have proven beyond our capacity to answer.…Human intelligence and artificial intelligence are meeting, being applied to pursuits on national, continental, and even global scales. Understanding this transition, and developing a guiding ethic for it, will require commitment and insight from many elements of society….This commitment must be made within nations and among them. Now is the time to define both our partnership with artificial intelligence and the reality that will result.”

As these excerpts suggest, the book addresses a wide spectrum of social and ethical implications, intended and unintended consequences, and the need for regulation of Artificial Intelligence. Beyond this broad-based and thoughtful evaluation, the key lessons regarding AI must include the recognition that machines are subject to human design: human biases and programmed prejudices can be transferred intrinsically to algorithms, and transparency and scrutiny are therefore required to create neutral algorithms. Looking at the big picture, a heartless and spiritless machine devoid of compassion, humanity, and wisdom can never become equal to humans or exert total control over them. No matter how smart models and chatbots become, and however extensively they develop their cloning abilities, they can never outsmart their creators: humans themselves, who remain the ultimate decision makers. The biggest threat to humans is posed not by machines, but by their unethical application by the self-same humans.

The Age of AI: And Our Human Future by Kissinger, Schmidt, and Huttenlocher is essential reading for general readers, students, educators, and policy makers. The book can also be used as a reference for academic programs in the history of technology; Science, Technology and Society (STS); ethics; business; and public policy.

(Dr Ahmed S. Khan — dr.a.s.khan@ieee.org — is a Fulbright Specialist Scholar)

 

