Philosophy of artificial intelligence
The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the field is concerned with the creation of artificial animals or artificial people (or, at least, artificial creatures; see artificial life), so the discipline is of considerable interest to philosophers.
Existential risk from artificial general intelligence
Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or another irreversible global catastrophe. One argument goes as follows: the human species currently dominates other species because the human brain possesses distinctive capabilities other animals lack. If AI were to surpass humanity in general intelligence and become superintelligent, then it could become difficult or impossible to control.
OpenAI
OpenAI is an American artificial intelligence (AI) research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P. OpenAI conducts research on artificial intelligence with the declared intention of developing "safe and beneficial" artificial general intelligence, which it defines as "highly autonomous systems that outperform humans at most economically valuable work".
Robot ethics
Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic (such as in healthcare or as 'killer robots' in war), and how robots should be designed such that they act 'ethically' (this last concern is also called machine ethics). Alternatively, roboethics refers specifically to the ethics of human behavior towards robots, as robots become increasingly advanced.
Friendly artificial intelligence
Friendly artificial intelligence (also friendly AI or FAI) is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure it is adequately constrained.
Robotics
Robotics is an interdisciplinary branch of computer science and engineering. Robotics involves the design, construction, operation, and use of robots. The goal of robotics is to design machines that can help and assist humans. Robotics integrates fields such as mechanical engineering, electrical engineering, information engineering, mechatronics, electronics, biomedical engineering, computer engineering, control systems engineering, software engineering, and mathematics.
Superintelligence
A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
Artificial general intelligence
An artificial general intelligence (AGI) is a hypothetical type of intelligent agent. If realized, an AGI could learn to accomplish any intellectual task that human beings or animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks. Creating AGI is a primary goal of some artificial intelligence research and of companies such as OpenAI, DeepMind, and Anthropic. AGI is a common topic in science fiction and futures studies.
Google DeepMind
DeepMind Technologies Limited, doing business as Google DeepMind, is a British-American artificial intelligence research laboratory which serves as a subsidiary of Google. Founded in the UK in 2010, it was acquired by Google in 2014, becoming a wholly owned subsidiary of Google parent company Alphabet Inc. after Google's corporate restructuring in 2015. The company is based in London, with research centres in Canada, France, and the United States.
Artificial intelligence arms race
A military artificial intelligence arms race is an arms race between two or more states to develop and deploy lethal autonomous weapons systems (LAWS). Since the mid-2010s, many analysts have noted the emergence of such an arms race between global superpowers for better military AI, driven by increasing geopolitical and military tensions. An AI arms race is sometimes placed in the context of an AI Cold War between the US and China. Lethal autonomous weapons systems use artificial intelligence to identify and kill human targets without human intervention.
Artificial consciousness
Artificial consciousness (AC), also known as machine consciousness (MC), synthetic consciousness or digital consciousness, is the consciousness hypothesized to be possible in artificial intelligence. It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience. The same terminology can be used with the term "sentience" instead of "consciousness" when specifically designating phenomenal consciousness (the ability to feel qualia).
Three Laws of Robotics
The Three Laws of Robotics (often shortened to The Three Laws or Asimov's Laws) are a set of rules devised by science fiction author Isaac Asimov, which were to be followed by robots in several of his stories. The rules were introduced in his 1942 short story "Runaround" (included in the 1950 collection I, Robot), although similar restrictions had been implied in earlier stories. The Three Laws, presented as being from the fictional "Handbook of Robotics, 56th Edition, 2058 A.D.", are: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
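The Laws form a strict priority ordering: each law applies only insofar as it does not conflict with the laws above it. A toy Python sketch of that precedence follows; the Laws are fiction, not an implemented standard, and the Action fields here are illustrative assumptions.

```python
# Toy sketch of the Laws' strict precedence: each law may only choose
# among the actions that the higher-priority laws already allow.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # violates the First Law (injury, or harm through inaction)
    obeys_order: bool     # satisfies the Second Law (follows a human's order)
    preserves_self: bool  # satisfies the Third Law (protects its own existence)

def permitted(candidates: list[Action]) -> list[Action]:
    """Filter candidate actions through the Three Laws in priority order."""
    # First Law is absolute: discard anything that harms a human.
    options = [a for a in candidates if not a.harms_human]
    # Second Law: prefer obedience, but never at the First Law's expense.
    if any(a.obeys_order for a in options):
        options = [a for a in options if a.obeys_order]
    # Third Law: prefer self-preservation only among what remains.
    if any(a.preserves_self for a in options):
        options = [a for a in options if a.preserves_self]
    return options

actions = [
    Action("shield the human", harms_human=False, obeys_order=False, preserves_self=False),
    Action("follow the order", harms_human=True, obeys_order=True, preserves_self=True),
]
print([a.name for a in permitted(actions)])  # ['shield the human']
```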
Regulation of algorithms
Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules and public sector policies for the promotion and regulation of algorithms, particularly in artificial intelligence and machine learning. For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union. Regulation of AI is considered necessary both to encourage AI and to manage its associated risks, but it remains challenging.
Military robot
Military robots are autonomous robots or remote-controlled mobile robots designed for military applications, from transport to search and rescue and attack. Some such systems are currently in use, and many are under development. Broadly defined, military robots date back to World War II and the Cold War in the form of the German Goliath tracked mines and the Soviet teletanks. With the introduction of the MQ-1 Predator drone, "CIA officers began to see the first practical returns on their decade-old fantasy of using aerial robots to collect intelligence".
Machine ethics
Machine ethics (or machine morality, computational morality, or computational ethics) is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors in man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers.
Algorithmic bias
Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm. Bias can emerge from many factors, including the design of the algorithm itself, unintended or unanticipated uses, and decisions about the way data is coded, collected, selected, or used to train the algorithm. For example, algorithmic bias has been observed in search engine results and on social media platforms.
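One common way such unfairness is made measurable is a group fairness metric. Below is a minimal Python sketch, not from the article, computing the demographic parity gap (the difference in favorable-outcome rates between groups) over entirely hypothetical loan decisions.

```python
# Minimal sketch of one way algorithmic bias is quantified:
# demographic parity, the gap in favorable-outcome rates between
# groups. Group names and decision data are hypothetical.

def positive_rate(outcomes):
    """Fraction of decisions in this group that were favorable (1)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in favorable-outcome rate between any two groups.

    A gap of 0 means every group receives favorable decisions at the
    same rate; larger gaps indicate the kind of "privileging" of one
    category over another described above.
    """
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval decisions (1 = approved) for two groups.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 75.0% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 37.5% approved
}
print(f"Demographic parity gap: {demographic_parity_gap(decisions):.3f}")
```

A single scalar like this cannot capture every notion of fairness (equalized odds, calibration, and other metrics can disagree with it), but it makes the "unfair outcomes" described above auditable.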
DALL-E
DALL-E (stylized as DALL·E) and DALL-E 2 are text-to-image models developed by OpenAI using deep learning methodologies to generate digital images from natural language descriptions, called "prompts". DALL-E was revealed by OpenAI in a blog post in January 2021, and uses a version of GPT-3 modified to generate images. In April 2022, OpenAI announced DALL-E 2, a successor designed to generate more realistic images at higher resolutions that "can combine concepts, attributes, and styles". OpenAI has not released source code for either model.
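Although the models are closed-source, OpenAI exposes them through a hosted API. The following is a minimal sketch of prompt-based image generation with the official openai Python package; the model name, parameters, and client behavior reflect the public API at the time of writing and may change.

```python
# Minimal sketch of generating an image from a text prompt via
# OpenAI's hosted API (the model weights and source are not public).
# Assumes the `openai` package (>= 1.0) and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",
    prompt="an armchair in the shape of an avocado",
    n=1,                # number of images to generate
    size="1024x1024",   # one of the resolutions the API supports
)
print(response.data[0].url)  # temporary URL of the generated image
```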
Automated decision-making
Automated decision-making (ADM) involves the use of data, machines and algorithms to make decisions in a range of contexts, including public administration, business, health, education, law, employment, transport, media and entertainment, with varying degrees of human oversight or intervention. ADM involves large-scale data from a range of sources, such as databases, text, social media, sensors, images or speech, that is processed using various technologies including computer software, algorithms, machine learning, natural language processing, artificial intelligence, augmented intelligence and robotics.
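The "varying degrees of human oversight" can be made concrete as a confidence threshold: the system decides automatically only when its model is sufficiently sure, and routes everything else to a person. A minimal, hypothetical Python sketch follows; the model score, threshold, and outcome labels are illustrative assumptions, not a description of any real ADM system.

```python
# Hypothetical sketch of human-in-the-loop automated decision-making:
# act automatically on high-confidence cases, escalate the rest.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str       # "approve", "deny", or "human_review"
    confidence: float  # model's confidence in its prediction

def automated_decision(score: float, review_threshold: float = 0.8) -> Decision:
    """Decide automatically only when the model is confident enough.

    `score` is a hypothetical model output in [0, 1], where values
    near 1 favor approval and values near 0 favor denial.
    """
    confidence = max(score, 1 - score)
    if confidence < review_threshold:
        return Decision("human_review", confidence)  # keep a human in the loop
    return Decision("approve" if score >= 0.5 else "deny", confidence)

for s in (0.95, 0.55, 0.10):
    print(s, automated_decision(s))
```

Raising the threshold shifts the system toward human oversight at the cost of throughput; lowering it does the reverse, which is exactly the policy trade-off ADM deployments have to set explicitly.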