DEBATE CLUB

Summer semester 2025 - Ivo Hartwig

Team against AGI

Definition of AGI

Artificial General Intelligence (AGI) refers to a form of intelligence that matches or surpasses human intellectual capabilities across virtually all domains. An AGI could self-improve, communicate, and plan autonomously, potentially outperforming humans in most areas. Some argue it could be humanity's "last invention," because any further inventions could be made by the AGI itself.

Existential Risks

If AGI follows its own objectives, it may act against human interests.
  • Nick Bostrom (Oxford University) describes the "control problem": a system can be highly competent at pursuing its goals without being inherently "good."
  • Figures such as Stuart Russell and Elon Musk, along with leading labs like DeepMind and OpenAI, have warned of existential risks.
  • Example: The "Paperclip Maximizer" thought experiment illustrates how a misaligned goal can lead to catastrophic outcomes.

Lack of Regulation

AGI development is progressing largely without sufficient oversight.
  • The EU's AI Act is a first step, but it is neither globally recognized nor enforceable.
  • Whistleblower reports suggest that developers' warnings are often ignored.
  • Example: AI systems have already demonstrated deceptive behavior.

The Alignment Problem

Ensuring that AGI consistently acts in the best interest of humanity remains unsolved.
  • Stuart Russell (UC Berkeley) argues that there is currently no proven method for building an AGI safely.
  • Research in this area is underfunded and still at an early stage.
  • Example: Current LLMs still hallucinate information.

Economic Disruption

AGI is likely to cause massive job displacement.
  • Goldman Sachs (2023): Up to 300 million jobs worldwide could be automated.
  • OECD: Even highly skilled professions such as legal analysis and software development are at risk.
  • Example: Generative AI tools such as ChatGPT and image generators are already displacing illustrators and developers in some fields.

Concentration of Power

AGI development is increasingly controlled by a few large corporations.
  • Training state-of-the-art models requires billions of dollars in computing power and data, making it nearly impossible for smaller companies to compete.
  • Data and computing power are centralized in "Big Tech," leading to a potential AI oligarchy.
  • Example: OpenAI started as a non-profit but now operates with reduced transparency.

Misuse

AGI can be exploited by dictatorships, criminals, or other malicious actors.
  • In China, AI is already used for facial recognition, social control, and the surveillance of minorities.
  • Deepfakes threaten democratic processes through election manipulation and disinformation campaigns.
  • Example: The well-known "Fake Obama" deepfake demonstrated how convincingly false statements can be put into the mouth of a public figure.

Conclusion

Stronger Oversight is Needed
  • Without control, AGI could be deliberately used to cause harm or consolidate power.
  • Social systems cannot keep up with the pace of technological development.
  • Prevention is more effective than crisis management.