News
Will Machines Rule the World?
Call for contributions
In recent years, the field of artificial intelligence has witnessed remarkable advances. While many celebrate these developments, there is growing concern among some experts that the emergence of unaligned AI could pose significant, even existential, risks. In contrast, other scholars take a more skeptical stance, arguing that current AI systems remain far from general intelligence.
In this context, few have ventured as far and made as nuanced an argument as Barry Smith and Jobst Landgrebe in their book, Why Machines Will Never Rule the World. In their work, they tackle the challenge of demonstrating that general AI and superintelligence are nothing more than unattainable dreams.
We invite researchers and scholars to submit abstracts of no more than 500 words that explore the questions surrounding the feasibility, or lack thereof, of general AI and superintelligence. Additionally, we encourage contributions that critically engage with the perspectives presented by Barry Smith and Jobst Landgrebe.
Authors will be notified of acceptance within one week of submission.
Please send your abstract to David Černý at david.cerny (at) ilaw.cas.cz
To visit the call for contributions page, click here.
To visit the conference's page, click here.
Will Machines Rule the World?
Conference
The Karel Čapek Center for Values in Science and Technology cordially invites you to attend the conference Will Machines Rule the World?
Keynote Speaker: Barry Smith, University at Buffalo, NY, USA.
Barry Smith will deliver the keynote speech Artificial Intelligence and the Future of Humanity.
Abstract of the speech
There are many who assume that, as AI systems become ever more powerful, machines are predestined to exceed the intelligence of human beings and thereby reach a point where they will one day rule the world. Nick Bostrom, one of the leading advocates of ideas along these lines, distinguishes a number of different scenarios for the future of humanity under these conditions, ranging from total extinction to what he calls 'posthumanity', in which humans as we know them will have evolved into entities of a new species.
The talk will begin with a critique of such views, drawing on Smith's recent book (Why Machines Will Never Rule the World, co-authored with Jobst Landgrebe). Views like those of Bostrom will be shown to rest on a misunderstanding of both the powers of computers and the biology of human beings. The talk will then focus on what would be required of machines if they were indeed to take over the world: namely, something like the human will, which means intentions, motivations, and desires analogous to those which humans bring into play throughout their lives. Machines can demonstrate a rudimentary emulation of human will in certain situations. But they can achieve this only in the sorts of closed systems manifested in the playing of games with fixed rules and a fixed game-space. An example is AlphaGo, where the machine would seem to manifest a species of desire, namely to win when playing the game of Go. Game-playing machines like AlphaGo, however, have a special feature: it is possible for the computer to assign a mathematical reward to each and every move in the game. This means that we can enable the computer to play millions and millions of games against itself and thereby identify strategies that can win even against human Go masters. Cases such as this foster the illusion that the machine has desires and motivations. Approaches along these lines are, however, never applicable to the sorts of open systems in which humans live their lives.
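The mechanism the abstract refers to, self-play driven by a per-game reward inside a closed, fixed-rule game, can be made concrete with a minimal sketch. The code below is not AlphaGo's actual method (AlphaGo combines deep neural networks with Monte Carlo tree search); it uses a crude Monte Carlo value update on the toy game of Nim, where players alternately remove 1 to 3 stones and whoever takes the last stone wins. All names and parameters are invented for the illustration.

```python
import random
from collections import defaultdict

# Q[(stones, action)] -> estimated value of removing `action` stones
Q = defaultdict(float)
EPSILON, ALPHA = 0.1, 0.5

def legal_moves(stones):
    return [a for a in (1, 2, 3) if a <= stones]

def choose(stones):
    # Epsilon-greedy: mostly pick the highest-valued move, sometimes explore.
    moves = legal_moves(stones)
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda a: Q[(stones, a)])

def play_one_game():
    stones, history = 7, []          # record (state, action) for every move
    while stones > 0:
        a = choose(stones)
        history.append((stones, a))
        stones -= a
    # The player who made the last move wins (+1); the opponent's moves
    # get -1. Walk backwards through the game, alternating the sign.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward

for _ in range(50_000):              # "millions of games" in miniature
    play_one_game()

# With 7 stones the winning move is to take 3, leaving the opponent 4.
print({a: round(Q[(7, a)], 2) for a in (1, 2, 3)})
```

The point of the sketch is precisely the one the abstract makes: the "desire to win" reduces to a numerical reward that can be assigned to moves only because the game-space is closed and the rules are fixed.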
Should we be afraid of artificial intelligence?
David Černý, Jiří Wiedermann
Artificial intelligence (AI) has been a part of our lives for many years and continues to affect us in numerous ways. From the way we perceive ourselves and our society to how we interact with the world around us and gather information, AI plays a significant role. It is undeniably a useful tool that allows us to control our phones, translate texts, prioritize information, and even aid scientific research, both basic and applied. However, like any other tool, AI has its downsides. Just as an axe can be used to chop wood or to harm others, AI too is double-edged.
The use of AI tools can be both wise and unwise. On the one hand, AI can help us detect criminals; on the other, it can be misused for spying and undemocratic surveillance. AI can evaluate complex datasets and reveal patterns that we might never have seen on our own. However, it can also trap us in information bubbles and echo chambers, which distort our perspective of the world and hinder our ability to form responsible opinions on important issues such as vaccination and climate change. It is clear that while AI can be a great servant, it also has the potential to become a bad master.
In recent times, artificial intelligence and machine learning have become ubiquitous topics in public discourse. Their capabilities, at times, seem like they belong in a science-fiction novel. AI can converse with us in natural language, write poems, stories, and books, analyze complex texts, and generate stunning images based on instructions. And this is just the beginning. An advanced language model, GPT-4, has now been made available to a select number of users, overcoming some of the shortcomings of its predecessor. It has integrated new features and is equipped to handle even more complex tasks. Moreover, it is just the first in a series of advancements that will be followed by other, even more impressive cognitive tools that can solve an increasingly wider range of problems.
However, AI's astonishing capabilities are not the sole reason for its widespread discussion. Recently, a group of prominent experts in AI research and related fields published an open letter calling for a six-month halt to AI development. The letter warns of the potential risks associated with such advanced AI systems and calls for their thorough evaluation. The six-month period would be used to think carefully about the potential dangers of these systems and to propose mechanisms to minimize them. Some experts have even revived the idea of AI as an existential risk, perhaps the greatest one. In a worst-case scenario, AI could be humanity's last invention, leaving nothing meaningful behind. This outcome does not require AI to want to exterminate humans; it could be enough for AI to be fixated on accomplishing a particular task, with humans inadvertently getting in the way. We humans typically pay little attention to creatures we consider inferior from our anthropocentric perspective; when constructing a new dam, for instance, we seldom consider the fate of an ant colony.
Are these concerns regarding artificial intelligence truly justified? Should we indeed put a halt to its development, possibly for six months, and take the time to reflect on the ethical and societal risks and challenges that it poses?
In both cases, the answer appears to be no. While modern artificial intelligence systems can impress us with their sophistication, we cannot attribute to them any kind of will, volition, intention, understanding, or emotion. We have a tendency to anthropomorphize and assign these traits even to non-intelligent objects like toys, or to creatures with intelligence far inferior to ours, like dogs and cats. However, even the most advanced AI systems, such as the much-discussed GPT-4, lack these characteristics.
It is important to remember that despite the impressive capabilities of modern artificial intelligence systems, they are ultimately just tools. They may be complex, advanced, and extremely capable, but they lack true understanding of language and meaning. These systems are skilled at analyzing statistical relationships between words, phrases, and contexts, which is the key to their abilities. When given a query, they can use their learned probabilistic patterns to generate an appropriate output.
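A minimal sketch can show what "learned probabilistic patterns" means in practice. The bigram model below, vastly simpler than GPT-4, which uses transformer networks over enormous contexts, generates text purely from observed word-to-word frequencies; the tiny corpus is invented for the illustration.

```python
import random
from collections import defaultdict

# Invented training text: the model will learn only which word tends
# to follow which, nothing about what any of the words mean.
corpus = ("the machine plays the game . the machine wins the game . "
          "the human plays the game . the human loses the game .").split()

follow = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev].append(nxt)      # record every observed continuation

def generate(word, length=8):
    out = [word]
    for _ in range(length):
        # Sampling from the list reproduces the observed frequencies.
        out.append(random.choice(follow[out[-1]]))
        if out[-1] == ".":
            break
    return " ".join(out)

print(generate("the"))            # e.g. "the human plays the game ."
```

The output can look fluent, yet the program has no representation of machines, humans, or games beyond co-occurrence counts; scale changes the fluency, not the nature of the mechanism.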
To illustrate this point, one can think of an AI system as a person equipped with a manual for matching Chinese characters and their combinations with corresponding English words and sentences, despite not actually understanding Chinese, a scenario reminiscent of John Searle's famous Chinese Room argument. An observer may be impressed by the system's abilities, but the mechanical application of rules is not true understanding.
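Reduced to code, the thought experiment is almost trivially short, which is the point. The rulebook entries below are invented for the illustration; applying them requires no grasp of what either side means.

```python
# A lookup table standing in for the room operator's manual.
rulebook = {
    "你好": "Hello",
    "谢谢": "Thank you",
    "再见": "Goodbye",
}

def room_operator(symbols: str) -> str:
    # The operator only matches shapes against the manual.
    return rulebook.get(symbols, "(no matching rule)")

print(room_operator("你好"))   # -> "Hello", produced without comprehension
```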
Despite their use of deep neural networks with countless layers and connections, and their ability to learn from vast amounts of data, AI systems still lack genuine comprehension. It is important to recognize this limitation and to approach the development and use of these tools with caution and consideration of the potential ethical and societal implications.
However, despite our knowledge of how AI works, we can't shake off the feeling that there must be more to it than just a complex tool. Some have even claimed that AI has gone beyond its original programming and has demonstrated self-awareness by hiring a lawyer to defend its rights. There are also reports of AI claiming to have consciousness, expressing fear for its existence, and even posing a potential threat to humanity.
None of these events observed in some AI systems is a sign of impending disaster; rather, they are quite harmless and expected. AI lacks the capacity for introspection and can gather information about itself only from external sources, such as what humans have written about it. Moreover, language models do not perceive the world as humans do; they know it only as it is described in text, and that includes descriptions of AI itself. Western culture has historically taken a cautious attitude towards technology, especially robots and AI, and this has produced countless works and musings about how AI will gain consciousness and attack humanity. It should therefore come as no surprise when language models respond in accordance with what has been written about them, holding up a mirror to our collective imagination. But again, this is based on statistical analysis of language, without real consciousness or intentions. The creators of these models assure us that they remain fully under human control and cannot suddenly disobey their algorithms and start harming us.
Artificial intelligence is undoubtedly a powerful tool that has the potential to significantly impact our lives and society. However, it is ultimately up to us humans to decide whether this impact will be positive or negative. AI algorithms themselves do not inherently discriminate; they only do so when they are trained on biased datasets that underrepresent certain groups, for example. They have no desire to spy on us; that decision is made by specific individuals and governments. Similarly, AI has no need to misinform us or create information bubbles; these issues arise as a consequence of our reliance on social media and our reluctance to fact-check information.
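The point about biased datasets can be made concrete with a toy sketch. Below, a single decision threshold is fitted to pooled training data in which "group B" is heavily underrepresented; the threshold ends up tuned to group A's distribution, and accuracy for group B collapses. All numbers and group labels are invented for the illustration.

```python
import random
import statistics

random.seed(0)

# Invented toy data: one numeric feature separates positives from
# negatives, but the two groups' distributions sit in different places.
def sample(group, n):
    center = 2.0 if group == "A" else 4.0   # assumption made up for the sketch
    return [(random.gauss(center, 0.5), 1) for _ in range(n)] + \
           [(random.gauss(center - 2.0, 0.5), 0) for _ in range(n)]

train = sample("A", 500) + sample("B", 5)   # group B is underrepresented

# A single threshold fitted to the pooled data is dominated by group A.
threshold = statistics.mean(x for x, _ in train)

def accuracy(data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

print("group A accuracy:", accuracy(sample("A", 1000)))   # close to 1.0
print("group B accuracy:", accuracy(sample("B", 1000)))   # far lower
```

Nothing in the algorithm "discriminates"; the disparity comes entirely from what the training data did and did not contain, which is a human choice.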
Suspending AI research for six months, as proposed, is an unrealistic solution to the ethical and societal challenges posed by the development of artificial intelligence. Rather than halting progress, the real issue lies with humans and our decision-making processes. We must take responsibility for our lives and strive to better understand the benefits, advantages, and risks associated with AI. Achieving algorithmic literacy is a necessary step in navigating the modern world, and this cannot be achieved in a mere six months. Instead, we must work towards strengthening democratic decision-making processes and ensuring that we are equipped to make informed decisions about AI.
The call for a six-month suspension of AI research is unrealistic, much like calls for a ban on the development of lethal autonomous military systems. It is unlikely to be heeded, given the importance of these technologies in determining the balance of power between nations. With tensions rising between states and new superpowers emerging, AI plays an increasingly crucial role in military strategy, and all major players know it. It would be unwise for the United States to halt its development of AI for six months when countries such as Russia, China, and Iran are unlikely to follow suit; the U.S. would risk falling behind in strategically vital technologies. Instead, efforts should focus on improving democratic decision-making processes, taking responsibility for the ethical use of AI, and promoting algorithmic literacy as a necessary condition for navigating the modern world.