Daniel Goodwin and I met at Mexico City’s airport, stepping off the same flight on our way to the Helena Summit. Driven in a large black American SUV, we got so immersed in a discussion about the power and limits of artificial intelligence that we barely noticed the unfolding tapestry of the world's fifth-largest metropolis (at 21.8 million people), catching mere glimpses of its intriguing beauty and legendary underbelly. I must return as a serious tourist, I told myself. After all, Frida Kahlo and Diego Rivera were from this place. It took about two hours of driving through the streets of the city to reach the highway to the mountains.
We were heading towards a place called Valle de Bravo, and a convening of about 50 experts put together by Helena, a special philanthropy, to explore critical challenges spanning biosecurity, mental health and AI. Our hosts opened their ranch and hearts to visitors arriving from around the globe. It was my first encounter with Mexican hospitality, and its warmth reminded me of how Cape Town’s people at their best welcome tourists!
Valle de Bravo lies southwest of Mexico City, in mountainous rainforest on Lake Avandaro, at 1,826m above sea level (for reference, Addis Ababa is at 2,355m, Johannesburg 1,735m and Denver 1,609m). It is breathtaking and, at that altitude, leaves one breathless. On a good day the sun is out in the morning and it pours mid-afternoon, reminding me of Johannesburg’s summer thunderstorms, particularly the rich smell of rain-soaked earth. We held our meetings under cover outdoors, with an incredibly diligent support staff navigating the shifting weather.
We had AI experts, Fortune 500 CEOs, Nobel Laureates, NGO leaders and leading academics tackle the greatest challenge of our time: how to intervene, mid-flight in a manner of speaking, in a system to keep superintelligence safe without sacrificing the core values of a free and open society. Mid-flight, because AI is no longer just a product and a tool: it is now part of our information-generating infrastructure, more ubiquitous by the day and galloping at a pace faster than (Gordon) Moore’s law. We are struggling to govern it, and worry that it may in fact not be possible.
The science community expects unprecedented advances in AI to have deep impacts in the biosciences, chemistry and material sciences. Advances hold major promise for better health and wellness (AI speeds up diagnostics, surveillance and vaccine R&D) and also for the environment and climate (AI-directed design of new battery types makes intermittent power sources like wind and solar more practical; enables new forms of catalysis for carbon dioxide removal at scale; and could lead to breakthroughs in alternatives to carbon-intensive concrete, whose production is responsible for about 8% of global emissions today).
The risks, however, especially in the biosciences, could equal or even outweigh the benefits. Advances in vaccine research and development, combined with desktop computing power, could make bioweapons easier to produce, which is why many biotech firms are introducing biosecurity programmes to prevent exactly that.
The design of AI-assisted synthetic compounds released directly into the environment could restore degraded soil and improve agricultural productivity and food security, but it could also detrimentally alter ecosystems.
Of course, task-specific AI applications as described above are one thing, but building “artificial general intelligence” (AGI), a brain of sorts, is quite another. Major countries and corporations are locked in an accelerated drive today to build AGI, and the fear is that whoever gets there first will achieve permanent dominance. To prevent chaos, we may need centralised control, global monitoring and limits on who can build AGI, at the risk of creating an oppressive surveillance state. Conversely, keeping AI open and decentralised may protect individual freedoms, but it increases the risk of accidents or misuse.
Helena Summit attendees also grappled with what has been described as “a pandemic of disconnection”, the severe level of social isolation and hopelessness engendered by the self-deceiving allure of virtual reality and by easy access to opioids and alcohol. In the US, the situation is acute, especially among young people. According to US CDC data from 2021, 42% of high school students reported persistent feelings of sadness or hopelessness. Even more alarming, 22% reported seriously considering suicide in the past year, and 10% reported attempting it.
What is to be done? There are four challenges to confront:
- How do we shift from a race to be first at “artificial general intelligence” towards a shared international project? Given the deep mistrust between global powers, what would it take to convince the key players to prioritise global safety over winning?;
- Is it possible to build a moral compass (called “alignment” in the tech world) into AGI? Who gets to build it? Do the people currently building AGI have the right to define what “good” means for all humanity? And is such an outcome even possible?;
- Are we as a civilisation wise enough to handle the power we are creating? And if AI eventually takes over the work of thinking and inventing, will humans lose their purpose and ambition, or will we be freed up for something greater?; and
- In a world where any media can be perfectly faked, what becomes the basis for shared reality? How do we navigate daily life when we cannot trust what we see, hear or read? Is there a way to rebuild collective truth in the age of perfect forgery?
The answers to the challenges are partly technical, but in the end will require the will, wisdom and caring only good democratic leadership can provide. The US historian Arthur Schlesinger Jnr, adviser to President John Kennedy in the 1960s, wrote that “[t]he mission of democratic statecraft is to keep institutions and values sufficiently abreast of the accelerating velocity of history to give societies a chance of controlling the energies let loose by science and technology. Democratic leadership is the art of fostering and managing innovation in the service of a free community.” (Cycles of American History, p 422)
The Helena Summit produced extraordinary coincidences. I met Andrew Zuckerman, whose photograph of Nelson Mandela appears on the cover of the book Kader Asmal, David Chidester and I co-edited, titled Mandela In His Own Words (Boston, Little Brown & Co, 2003 and 2017). It was wonderful to meet up with Larry Diamond, whom I last saw about 30 years ago at a time when we were building South Africa’s democracy. And then there is Daniel Goodwin, engineer, scientist and entrepreneur in service of the social good. Our bond was made on the road from Mexico City’s airport to Valle de Bravo and, as it turned out, the return journey to Boston too.
