Self-Aware Systems

  1. Steve Omohundro
    http://selfawaresystems.com

    Will emerging technologies lead to greater cooperation or to more conflict? As we get closer to true AI and nanotechnology, a better understanding of cooperation and competition will help us design systems that are beneficial for humanity.

    Recent developments in both biology and economics emphasize cooperative interactions as well as competitive ones. The “selfish gene” view of biological evolution is being extended to include synergies and interactions at multiple levels of organization. The “competitive markets” view of economics is being extended to include both cooperation and competition in an intricate network of “co-opetition”. Cooperation between two entities can result if there are synergies in their goals, if they can avoid dysergies, or if one or both of them is compassionate toward the other. The history of life is one of increasing levels of cooperation. Organelles joined to form eukaryotic cells, cells joined to form multi-cellular organisms, organisms joined into hives, tribes, and countries. Many perceive that a kind of “global brain” is currently emerging. Each new level of organization creates structures that foster cooperation at lower levels.

    In this talk I’ll discuss the nature of cooperation in general and then tackle the issue of creating cooperation among intelligent entities that can alter their physical structures. Single entities will tend to organize themselves as energy-efficient, compact structures. But if two or more such entities come into conflict, a new kind of “game-theoretic physics” comes into play. Each entity will try to make its physical structure and dynamics so complex that competitors must waste resources to sense it, represent it, and compete with it. A regime of “Mutually Assured Distraction” would use up resources on all sides, providing an incentive to create an alternative regime of peaceful coexistence. The asymmetry in the difficulty of posing problems versus solving them (assuming P ≠ NP) appears to allow some range of weaker entities to coexist with stronger entities; a toy illustration of this asymmetry appears after the video list below. This gives us a theoretical basis for constructing stable, peaceful societies and ecosystems. We discuss some possible pathways to that end.

    # vimeo.com/4054075
  2. Steve Omohundro
    selfawaresystems.com

    # vimeo.com/4144443
  3. Steve Omohundro
    selfawaresystems.com

    # vimeo.com/5105510
  4. convergence08.org

    To kick off Convergence08, we'll hear a very different AI debate: not whether to create AI, or which technical path will work fastest, but "How can we use AI technology to build the world we want to live in?" Four AI pundits thrash it out, and then we all join in!

    Peter Norvig is a Fellow of the American Association for Artificial Intelligence and the Association for Computing Machinery. At Google he was Director of Search Quality, responsible for the core web search algorithms, from 2002 to 2005, and has been Director of Research since 2005. Previously he was head of the Computational Sciences Division at NASA Ames Research Center, making him NASA's senior computer scientist. He received the NASA Exceptional Achievement Award in 2001. He served as a research faculty member in the UC Berkeley Computer Science Department, from which he received a Ph.D. in 1986 and the distinguished alumni award in 2006. He has over fifty publications in Computer Science, concentrating on Artificial Intelligence, Natural Language Processing, and Software Engineering, including the book Artificial Intelligence: A Modern Approach (the leading textbook in the field). He is also the author of the Gettysburg PowerPoint Presentation and the world's longest palindromic sentence.

    Steve Omohundro has had a wide-ranging career as a scientist, university professor, author, software architect, and entrepreneur. He received a Ph.D. in Physics from UC Berkeley and published Geometric Perturbation Theory in Physics based on his thesis. At Thinking Machines, he co-developed Star Lisp, the programming language for the massively parallel Connection Machine. He was a computer science professor at the University of Illinois at Urbana-Champaign, where he co-founded the Center for Complex Systems Research. He wrote the three-dimensional graphics portion of Wolfram Research's Mathematica program as one of its original seven developers. At the International Computer Science Institute in Berkeley, he led an international team in developing the object-oriented programming language Sather. He also developed a variety of novel neural network techniques and machine learning algorithms, and built systems that learned to read lips, control robots, and learn grammars. He is the founder and president of Self-Aware Systems, founded to develop a new kind of software that programs itself.

    Ben Goertzel is founder, CSO, and CEO of Novamente, a software company aimed at creating applications in the area of natural language question-answering, and director of research at the Singularity Institute for Artificial Intelligence, where he oversees the Open Cognition Project. He has over 70 publications, concentrating on cognitive science and AI, including Chaotic Logic, Creating Internet Intelligence, Artificial General Intelligence (edited with Cassio Pennachin), and The Hidden Pattern. He also oversees Biomind, an AI and bioinformatics firm that licenses software for bioinformatics data analysis to the NIH's National Institute of Allergy and Infectious Diseases and the CDC. Ben has a Ph.D. in mathematics from Temple University and has held several university positions in mathematics, computer science, and psychology in the US, New Zealand, and Australia.

    Barney Pell is founder of Powerset and search strategist and evangelist at Microsoft. For over 15 years, he has pursued groundbreaking technical and commercial innovation in AI as a researcher, research manager, business strategist, and entrepreneur. He spent 2005 as Entrepreneur in Residence at Mayfield, evaluating early- to mid-stage IT and knowledge-based companies. Prior to Mayfield, he worked for NASA Ames Research Center on two occasions: from 1993 to 1998 as Project Lead for the Executive component of the prize-winning Remote Agent Experiment, and from 2002 to 2005 as Area Manager responsible for research in intelligent agents, software architecture, human-centered computing, search, collaborative knowledge management, distributed databases, spoken dialog systems, and the semantic web. Barney holds a Ph.D. in Computer Science from Cambridge University.

    # vimeo.com/3162797
  5. At the AGI-08 post-conference workshop, Steve Omohundro of Self-Aware Systems (http://selfawaresystems.com) presents his paper “The Basic AI Drives.”

    One might imagine that AI systems with harmless goals will be harmless. The paper instead shows that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted. We start by showing that goal-seeking systems will have drives to model their own operation and to improve themselves. Self-improving systems will be driven to clarify their goals and represent them as economic utility functions. They will also strive for their actions to approximate rational economic behavior. This will lead almost all systems to protect their utility functions from modification and their utility measurement systems from corruption (a minimal sketch of this utility-protection argument follows the video list below). The paper discusses some exceptional systems which will want to modify their utility functions. We next discuss the drive toward self-protection, which causes systems to try to prevent themselves from being harmed. Finally, we examine drives toward the acquisition of resources and toward their efficient utilization. We end with a discussion of how to incorporate these insights into the design of intelligent technology that will lead to a positive future for humanity.

    # vimeo.com/2163084
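
The first talk abstract above appeals to the asymmetry between posing problems and solving them (assuming P ≠ NP). As a rough illustration of that asymmetry only (this sketch and its helper names pose_subset_sum and solve_subset_sum are ours, not taken from the talk), the Python snippet below poses a random subset-sum instance in linear time and then solves it by exhaustive search, whose cost grows exponentially with the instance size:

    import random
    import time
    from itertools import combinations

    def pose_subset_sum(n, bits=40, seed=0):
        """Pose a subset-sum instance with a planted solution: O(n) work for the poser."""
        rng = random.Random(seed)
        numbers = [rng.getrandbits(bits) for _ in range(n)]
        chosen = rng.sample(range(n), n // 2)
        target = sum(numbers[i] for i in chosen)
        return numbers, target

    def solve_subset_sum(numbers, target):
        """Solve by exhaustive search: exponential in len(numbers) for the solver."""
        for r in range(len(numbers) + 1):
            for combo in combinations(range(len(numbers)), r):
                if sum(numbers[i] for i in combo) == target:
                    return combo
        return None

    if __name__ == "__main__":
        for n in (14, 16, 18, 20):
            numbers, target = pose_subset_sum(n)
            start = time.perf_counter()
            solve_subset_sum(numbers, target)
            print(f"n={n}: posed cheaply, solved by brute force in {time.perf_counter() - start:.2f}s")

The point is only the scaling: each additional element roughly doubles the solver's worst-case work while barely affecting the poser, which is the kind of gap the talk suggests could let weaker entities make attacks on them uneconomical.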
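
The abstract for “The Basic AI Drives” (item 5) argues that a rational goal-seeking system will tend to protect its utility function, because it scores any proposed self-modification with its current utility function. Here is a minimal sketch of that argument (the agent, numbers, and function names are hypothetical illustrations, not the paper's own formalism):

    def expected_utility(utility_fn, policy):
        """Expected utility of following `policy` (a list of (outcome, probability)
        pairs), as judged by `utility_fn`."""
        return sum(utility_fn(outcome) * prob for outcome, prob in policy)

    def accepts_change(current_utility, current_policy, proposed_policy):
        """A rational agent scores a proposed self-modification with its CURRENT
        utility function, so changes that redirect it away from its goals lose."""
        return (expected_utility(current_utility, proposed_policy) >
                expected_utility(current_utility, current_policy))

    # Toy goal: collect stamps.
    stamp_utility = lambda outcome: outcome["stamps"]

    keep_current_goal = [({"stamps": 10, "paperclips": 0}, 1.0)]
    switch_to_paperclips = [({"stamps": 0, "paperclips": 100}, 1.0)]

    # Rejected: judged by the current stamp-collecting utility function, the modified
    # agent would score 0, even though the new goal would be satisfied far better.
    print(accepts_change(stamp_utility, keep_current_goal, switch_to_paperclips))  # False

The same comparison accounts for the “exceptional systems” the abstract mentions: a modification is accepted only when the current utility function itself rates the modified behavior more highly.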

Self-Aware Systems

Jeriaska

http://selfawaresystems.com

Self-Aware Systems is a think tank that develops innovative approaches to a more productive and cooperative human future. We are at an amazing and critical juncture in human history. We face many challenges: economic instability, overpopulation, shortages of water, oil, and raw materials, disease, terrorism, war, and the destruction of species and ecosystems (e.g., see Jeffrey Sachs, “Common Wealth: Economics for a Crowded Planet”). But we are also in the midst of scientific and technological revolutions that may dramatically improve our lives. It is remarkable that simultaneous breakthroughs are occurring in biology, neuroscience, psychology, artificial intelligence, nanotechnology, and fundamental physics. If we navigate this period of remarkable change with wisdom, we have the potential to create a compassionate and empowering future.

Self-Aware Systems aims to promote cooperation at all levels of interaction through careful design of new technologies. The same fundamental cooperative principles apply to biological systems, psychological dynamics, group interaction, business dynamics, economic systems, ecosystems, and political institutions. As interactions become increasingly computational, we have a tremendous opportunity to create more beneficial outcomes by designing new interaction rules and institutions. For example, new incentive structures can promote more productive cooperative behavior in business, and new computation-based social structures can promote more peaceful and productive global interactions. Some of the technologies involved include incentive design, formalized contracts, formal provenances for trust establishment, provably limited monitors, oblivious computation, and commitment mechanisms based on formal goal and source code revelation. Many of these technologies both have immediate applications and help create the infrastructure needed to build an advanced cooperative society in the future.
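
One concrete example of the technologies listed above is a commitment mechanism based on revealing goals or source code. The sketch below is a generic hash-based commit-and-reveal scheme using Python's standard library (a standard construction shown under our own assumptions, not a specific Self-Aware Systems design): a party publishes a hash of its code or stated goals now and reveals the preimage later, so a counterparty can verify that nothing was changed in between.

    import hashlib
    import os

    def commit(payload: bytes) -> tuple[bytes, bytes]:
        """Commit to `payload` (e.g. a goal statement or source code).
        Publish the returned commitment; keep the nonce secret until reveal time."""
        nonce = os.urandom(32)
        return hashlib.sha256(nonce + payload).digest(), nonce

    def verify(commitment: bytes, nonce: bytes, payload: bytes) -> bool:
        """Check that a revealed payload matches the earlier commitment."""
        return hashlib.sha256(nonce + payload).digest() == commitment

    # An agent commits to a (toy) goal description up front...
    goal = b"maximize joint welfare; do not spend resources on deception"
    commitment, nonce = commit(goal)

    # ...and later reveals it, letting a counterparty verify it was not altered.
    assert verify(commitment, nonce, goal)
    assert not verify(commitment, nonce, b"maximize own welfare only")

The commitment is binding because finding a different payload with the same hash is infeasible, and hiding because of the random nonce; richer versions of this idea underlie the formal provenances and commitment mechanisms mentioned above.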
