
Artificial Intelligence

AI in General

  • Encyclopedia entries:
  • The introduction to Stuart Russell and Peter Norvig's leading AI textbook, Artificial Intelligence: A Modern Approach, is available online here.
  • CS researchers Marcus Hutter and Shane Legg have collected ~70 definitions of intelligence from various sources. The collection is available on arXiv.
  • MindPapers' section on AI is here.
  • The PhilPapers category for the philosophy of AI is here.
  • The PhilPapers category for the Philosophy of Computing and Information

The Nature of AI Research

History of AI
  • The Quest for Artificial Intelligence, a book by Nils Nilsson on the history of the field, published by Cambridge University Press, is available online here.
Technical and Semi-technical
  • Talking Machines, a podcast about machine learning
  • Pages on related topics from the SEP:
  • Articles from Scholarpedia:
    Technological State of Play
    It is difficult to keep this particular section of NotNotPhilosophy.com up-to-date as new developments appear so quickly. But recent illustrations of the rapid march of technological progress include the following:

    • Recent advances span self-driving cars, novel learning procedures, machine classification, expert systems, and Big Data.
    • Animal-inspired robots coming out of Boston Dynamics:
    • IBM's Watson, the Jeopardy champ:
    • A documentary on Watson, in four videos starting here
    • An analysis of how Watson answers a question
    • A (simulated) robot learning to walk from scratch
    • A robotic arm catching things:
    • Computer game-playing has historically received much attention in AI. Chess programs can now beat any human, so the focus has shifted to computer Go, the most difficult deterministic perfect-information game to program computers to play. The best conventional Go programs reached rankings of 5-6 dan on the KGS Go Server (strong amateur, roughly the equivalent of somewhere above master rank in chess). UPDATE: Google DeepMind has recently combined deep neural networks (a value network and a policy network) with standard Monte Carlo tree search techniques to create AlphaGo: "Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away." Fan Hui, the European champion, is a 2nd dan professional player. Professional dan ranks in Go run from 1st to 9th (9th dan is roughly equivalent to a chess grandmaster); they are distinct from amateur dan ranks and are denoted by a "p". Fan Hui is a 2p player, while Lee Sedol, whom AlphaGo is scheduled to play next, is a 9p player and arguably the best active player (Go has no single world championship, so there is no official title). A minimal sketch of the kind of search AlphaGo uses appears after this list.
    • Google’s self-driving cars: HuffPo article, Tech Review, Google 4/2014, Google 5/2014
    • This video of a (simulated) robot learning to walk is an interesting example of machine learning techniques
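
    The AlphaGo item above describes combining a policy network, a value network, and Monte Carlo tree search. The sketch below illustrates that general idea only: a PUCT-style tree search in which a policy prior focuses exploration and a value estimate scores unvisited positions. It is not DeepMind's implementation; the "networks" are uniform/neutral placeholders, and the game is a toy take-1-2-or-3-stones game rather than Go.

```python
# Minimal illustration of policy-prior + value-estimate Monte Carlo tree search,
# applied to a toy subtraction game (take 1-3 stones; taking the last stone wins).
# The policy/value functions are placeholders standing in for trained networks.
import math

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def policy_prior(stones):
    """Placeholder policy 'network': a uniform prior over legal moves."""
    moves = legal_moves(stones)
    return {m: 1.0 / len(moves) for m in moves}

def value_estimate(stones):
    """Placeholder value 'network': a neutral estimate for the player to move."""
    return 0.0  # a trained network would return something in [-1, 1]

class Node:
    def __init__(self, stones):
        self.stones = stones
        self.children = {}                       # move -> child Node
        self.prior = policy_prior(stones) if stones > 0 else {}
        self.visits = {m: 0 for m in self.prior}
        self.value_sum = {m: 0.0 for m in self.prior}

    def select(self, c_puct=1.0):
        """PUCT rule: trade off backed-up value against prior-weighted exploration."""
        total = sum(self.visits.values()) + 1
        def score(m):
            q = self.value_sum[m] / self.visits[m] if self.visits[m] else 0.0
            u = c_puct * self.prior[m] * math.sqrt(total) / (1 + self.visits[m])
            return q + u
        return max(self.prior, key=score)

def simulate(node):
    """One search pass: descend the tree, evaluate a leaf, back the value up."""
    if node.stones == 0:
        return -1.0                              # the previous player took the last stone
    move = node.select()
    if move not in node.children:                # expand: evaluate the new leaf
        child = Node(node.stones - move)
        node.children[move] = child
        leaf_value = -1.0 if child.stones == 0 else value_estimate(child.stones)
    else:                                        # otherwise keep descending
        leaf_value = simulate(node.children[move])
    value = -leaf_value                          # values flip between alternating players
    node.visits[move] += 1
    node.value_sum[move] += value
    return value

root = Node(10)
for _ in range(2000):
    simulate(root)
print({m: root.visits[m] for m in root.visits})  # most-visited move approximates the best move
```
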
  • Turing Test

    • Open access copies of Turing's writings:
      • Computing Machinery and Intelligence (Turing, 1950): here. (This is the classic article in which Turing proposed the "Turing Test".)
      • The Turing Digital Archive: an extensive archive of letters, photographs, newspaper articles, and unpublished papers by or about Alan Turing.
    • Encyclopedia entries:
      • The Turing Test: SEP
      • Turing: SEP
    • Useful sites and links:
      • MindPapers' section on the Turing Test is here.
      • Alan Turing: The Enigma is a large and interesting website from Turing's biographer, Andrew Hodges.
      • MacroVU produced a wonderful argument map about the Turing test. (web.archive link)
    • The Turing test in practice:
    • The two clips -- here and here -- constitute the 1994 BBC Horizon documentary "The Strange Life and Death of Dr Turing". (Embedding is disabled on these clips, unfortunately!)

    Connectionism

    Computation

    Strong or General AI vs. Weak or Narrow AI

    • Encyclopedia entries:
    • Pei Wang’s introduction to AGI
    • An interview on AGI with Pei Wang, conducted by Ben Goertzel; both are leading AGI researchers.

    Challenges to Strong AI

  • The PhilPapers category for Can Machines Think?
    The Chinese Room argument
      The Chinese Gym argument, also due to Searle, extends the Chinese Room argument so that it applies to neural networks.
    AI and Gödel
    • Major online articles on Gödelian arguments against AI include:
      • John Lucas (1961) Minds, Machines and Gödel: here.
      • John Lucas (1996) Minds, Machines and Gödel: A Retrospect: here.
      • Roger Penrose (1996) Beyond the Doubting of a Shadow: here.
    • Encyclopedia entries:
      • The Lucas-Penrose Argument About Gödel's Theorem: IEP
      • Gödel's Incompleteness Theorems [Gödelian Arguments Against Mechanism]: SEP
      • The Church-Turing Thesis [Misunderstandings of the Thesis]: SEP
      • Turing Machines [What Cannot be Computed]: SEP
      • Quantum Computing - Why standard QM is just as subject to Gödelian concerns as classical computing:
      • Quantum Approaches to Consciousness [Penrose and Hameroff: Quantum Gravity and Microtubuli]: SEP
    • Useful links:
      • MindPapers' section on Gödelian arguments is here.
      • MacroVU has produced a wonderful map of the arguments pro and con the mathematical possibility of thinking computers. (web.archive link)
      • John Lucas is (along with Roger Penrose) one of the two major proponents of the idea that Gödel's theorems imply that a digital computer cannot possess problem-solving powers equal to those of the human mind. All his articles on the subject are online here.
    Frame Problem
  • AI futurism

    AI timelines
    • AI Impacts is a project investigating the development of AI, notable primarily for its analyses of various published predictions and arguments about AI timelines. It maintains a list of expert opinion surveys and several analyses of such surveys and predictions.
    • A survey on ‘Time to Human-Level AI’, conducted by Ben Goertzel and Seth Baum: http://www.hplusmagazine.com/articles/ai/how-long-till-human-level-ai
    • Luke Muehlhauser (GiveWell) compiled a report, "What do we know about AI timelines", for the Open Philanthropy Project, relying heavily on AI Impacts' work.
    Singularitarianism
    • A presentation on the three schools of singularity thought (accelerating change, the event horizon, and the intelligence explosion), given at the 2007 Singularity Summit
    • The concept of the Intelligence Explosion is (broadly) associated with I. J. Good, whose seminal 1965 paper 'Speculations Concerning the First Ultraintelligent Machine' is available here. The Oxford philosopher Nick Bostrom is the primary modern figure, from his 1998 paper 'How Long Before Superintelligence?' to his 2014 monograph 'Superintelligence: Paths, Dangers, Strategies'.
    • 'Intelligence Explosion: Evidence and Import' is a chapter by Luke Muehlhauser and Anna Salamon in 'Singularity Hypotheses'. It is available online here.
    • Chalmers’ paper 'The Singularity: A Philosophical Analysis' is available online here
    • The Event Horizon School is primarily associated with Vernor Vinge, who coined the term 'Singularity' by analogy with the singularity of a black hole, at which then-current physical theories break down. He developed the idea further in his 1993 article 'The Coming Technological Singularity: How to Survive in the Post-Human Era', which is online here.

    Ethics of AI

    • Encyclopedia entries:
      • SEP’s article on
      • IEP
    • Relevant PhilPapers and MindPapers categories:
      • The PhilPapers category for the Ethics of Artificial Intelligence
      • Nick Bostrom and Eliezer Yudkowsky’s ‘The Ethics of Artificial Intelligence’, published in The Cambridge Handbook of Artificial Intelligence, is available online here

    Dangers of AI

    • 'Global Catastrophic Risks' has a chapter on AI, available online here
    • io9 has this very nice informal introductory discussion of the "superintelligence control problem" (i.e., how do you make sure that a machine won't run amok if it is vastly more intelligent than you are?). A related paper, Why We Need Friendly AI (Muehlhauser and Bostrom, 2014), is online here.
    • An open letter on research priorities surrounding risks and benefits from AI, signed by Stuart Russell, Stephen Hawking, Peter Norvig, Elon Musk, and Nick Bostrom, as well as many experts in AI, is available along with its list of signatories here. Its associated research priorities document is here.
    • In the following talk (~1hr 12 mins, run by Authors@Google), Nick Bostrom (of Oxford's Future of Humanity Institute) discusses his book on superintelligence and its associated dangers. A similar ~55min talk run by Harvard Effective Altruism is here. Alternatively, he has appeared on several podcasts, including that of the Royal Society of Arts, EconTalk, and the philosophy podcast The Partially Examined Life.