7 October 2024

Mimicking the brain: Deep learning meets vector-symbolic AI

Neuro-Symbolic AI: Enhancing Common Sense in AI

Natural language understanding, in contrast, constructs a meaning representation and uses it for further processing, such as answering questions. In logic programming, the clauses that describe a program are interpreted directly to run it; no explicit series of actions is required, as in imperative programming languages.

Currently, Python, a multi-paradigm programming language, is the most popular programming language, partly due to its extensive package library supporting data science, natural language processing, and deep learning. Python includes a read-eval-print loop, functional elements such as higher-order functions, and object-oriented programming with metaclasses.

In a related line of effort, deep learning systems are trained to solve problems such as term rewriting, planning, elementary algebra, logical deduction, abduction, and rule learning. These problems are known to require sophisticated, non-trivial symbolic algorithms. Attempting such hard but well-understood problems with deep learning adds to the general understanding of its capabilities and limits. It also yields deep learning modules that are potentially faster (after training) and more robust to data imperfections than their symbolic counterparts.
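
The symbolic tasks mentioned above can be made concrete with a small example. The sketch below implements a toy term-rewriting system in Python; the rules and term encoding are invented for illustration and are not taken from any particular system.

```python
# Minimal term-rewriting sketch: expressions are nested tuples,
# e.g. ("+", "x", 0) stands for x + 0. Illustrative rules only.

RULES = [
    (lambda t: isinstance(t, tuple) and t[0] == "+" and t[2] == 0, lambda t: t[1]),  # x + 0 -> x
    (lambda t: isinstance(t, tuple) and t[0] == "*" and t[2] == 1, lambda t: t[1]),  # x * 1 -> x
    (lambda t: isinstance(t, tuple) and t[0] == "*" and t[2] == 0, lambda t: 0),     # x * 0 -> 0
]

def rewrite(term):
    """Apply rules bottom-up until no rule matches (a fixed point)."""
    if isinstance(term, tuple):
        term = (term[0],) + tuple(rewrite(a) for a in term[1:])
    for matches, action in RULES:
        if matches(term):
            return rewrite(action(term))
    return term

print(rewrite(("+", ("*", "x", 1), 0)))  # x * 1 + 0 simplifies to "x"
```

A learned model attempting the same task would have to approximate this exact symbolic procedure from examples, which is what makes such benchmarks informative about deep learning's limits.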

Leveraging AI in military decision-making processes enhances battlefield effectiveness and improves the quality of critical operational decisions. The combination of neural networks and symbolic reasoning has the potential to revolutionize military operations by significantly improving threat detection accuracy and enabling faster, more precise tactical decision-making. This paper provides a thorough analysis that offers valuable insights for researchers, practitioners, and military policymakers who are concerned about the future of AI in warfare. Through a critical examination of existing research, key challenges are identified, and promising directions for future development are outlined. This aims to further empower the responsible deployment of Neuro-Symbolic AI in areas such as optimized logistics, enhanced situational awareness, and dynamic decision-making.

Early expert systems performed at a level comparable to human experts, able to weigh different symptoms, patient history, and other factors through rule-based reasoning. AI developers created many such rule systems to characterize the rules people commonly use to make sense of the world. This resulted in AI systems that could help translate a particular symptom into a relevant diagnosis or identify fraud. One of the biggest open problems is automatically encoding better rules for symbolic AI. "There have been many attempts to extend logic to deal with this which have not been successful," Chatterjee said.

Like in so many other respects, deep learning has had a major impact on neuro-symbolic AI in recent years. On the one hand, this manifests in an almost exclusive emphasis on deep learning approaches as the neural substrate, whereas earlier neuro-symbolic AI research often deviated from standard artificial neural network architectures [2]. On the other hand, we may be seeing indications of a realization that pure deep-learning-based methods are likely to be insufficient for certain types of problems now being investigated from a neuro-symbolic perspective. Neuro-Symbolic AI can enable AI systems to reason about everyday situations, making them better at understanding context [62, 63]. This is because Neuro-Symbolic AI models combine the strengths of neural networks and symbolic reasoning. One key aspect of commonsense reasoning is counterfactual reasoning, which allows the AI to consider alternative scenarios and their potential outcomes [64].

By combining deep learning neural networks with logical symbolic reasoning, AlphaGeometry charts an exciting direction for developing more human-like thinking. Early deep learning systems focused on simple classification tasks like recognizing cats in videos or categorizing animals in images. However, innovations in GenAI techniques such as transformers, autoencoders and generative adversarial networks have opened up a variety of use cases for using generative AI to transform unstructured data into more useful structures for symbolic processing. Now, researchers are looking at how to integrate these two approaches at a more granular level for discovering proteins, discerning business processes and reasoning. Symbolic processes are also at the heart of use cases such as solving math problems, improving data integration and reasoning about a set of facts.

Challenges and Limitations

The research community is still in the early phase of combining neural networks and symbolic AI techniques. Much of the current work considers these two approaches as separate processes with well-defined boundaries, such as using one to label data for the other. Innovations in backpropagation in the late 1980s helped revive interest in neural networks.

As a result, numerous researchers have focused on creating intelligent machines throughout history. For example, as early as the 1980s, researchers predicted that deep neural networks would eventually be used for autonomous image recognition and natural language processing. We've been working for decades to gather the data and computing power necessary to realize that goal, and now both are available. Neuro-symbolic models have already beaten cutting-edge deep learning models in areas like image and video reasoning. Furthermore, compared to conventional models, they have achieved good accuracy with substantially less training data. The AI-powered battlefield of the future will be driven by Neuro-Symbolic AI, revolutionizing warfare.

This enables neuro-symbolic AI models to reason about the world and make predictions that are more consistent with human understanding. Historically, the two encompassing streams of symbolic and sub-symbolic stances to AI evolved in a largely separate manner, with each camp focusing on selected narrow problems of their own. Originally, researchers favored the discrete, symbolic approaches towards AI, targeting problems ranging from knowledge representation, reasoning, and planning to automated theorem proving.

This directed mapping helps the system to use high-dimensional algebraic operations for richer object manipulations, such as variable binding — an open problem in neural networks. When these “structured” mappings are stored in the AI’s memory (referred to as explicit memory), they help the system learn—and learn not only fast but also all the time. The ability to rapidly learn new objects from a few training examples of never-before-seen data is known as few-shot learning. By augmenting and combining the strengths of statistical AI, like machine learning, with the capabilities of human-like symbolic knowledge and reasoning, we’re aiming to create a revolution in AI, rather than an evolution. Logical Neural Networks (LNNs) are neural networks that incorporate symbolic reasoning in their architecture. In the context of neuro-symbolic AI, LNNs serve as a bridge between the symbolic and neural components, allowing for a more seamless integration of both reasoning methods.
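
As a minimal illustration of the variable binding mentioned above, the sketch below uses a common vector-symbolic trick: binding by elementwise multiplication of random bipolar vectors. The role and filler names are hypothetical, chosen only for the example.

```python
import numpy as np

# Vector-symbolic variable binding with random bipolar (+1/-1) vectors.
# Binding is elementwise multiplication; since v * v = 1 for such
# vectors, multiplying the bound pair by the role vector again
# recovers the filler exactly.
rng = np.random.default_rng(0)
d = 10_000
role   = rng.choice([-1, 1], size=d)   # e.g. the variable "color"
filler = rng.choice([-1, 1], size=d)   # e.g. the value "red"

bound = role * filler        # the pair color=red as a single vector
recovered = role * bound     # unbind with the same role vector

assert np.array_equal(recovered, filler)
print("filler recovered exactly")
```

Storing such bound vectors in an explicit memory is, roughly, what the few-shot learning setup described above exploits: a new object can be written as one vector after a single exposure.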

A closer look into the history of combining symbolic AI with deep learning

Common symbolic AI algorithms include expert systems, logic programming, semantic networks, Bayesian networks and fuzzy logic. These algorithms are used for knowledge representation, reasoning, planning and decision-making. They work well for applications with well-defined workflows, but struggle when apps are trying to make sense of edge cases. On the other hand, Neural Networks are a type of machine learning inspired by the structure and function of the human brain. Neural networks use a vast network of interconnected nodes, called artificial neurons, to learn patterns in data and make predictions.
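
As a toy illustration of one of the symbolic techniques listed above, the sketch below implements a fragment of fuzzy logic; the membership functions and thresholds are made up for the example.

```python
# Toy fuzzy-logic sketch: degrees of truth in [0, 1], combined with
# min for AND (and max for OR). Membership ramps are illustrative.

def hot(temp_c):       # membership of "hot", ramping from 25 C to 35 C
    return min(1.0, max(0.0, (temp_c - 25) / 10))

def humid(rh_pct):     # membership of "humid", ramping from 50% to 80%
    return min(1.0, max(0.0, (rh_pct - 50) / 30))

def uncomfortable(temp_c, rh_pct):
    return min(hot(temp_c), humid(rh_pct))   # fuzzy AND of both conditions

print(uncomfortable(30, 65))  # 0.5: both memberships are halfway up their ramps
```

The contrast with a neural network is direct: here every rule and threshold is written down and inspectable, whereas a network would infer an equivalent decision surface from labeled data.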

Henry Kautz,[19] Francesca Rossi,[81] and Bart Selman[82] have also argued for a synthesis. Their arguments are based on a need to address the two kinds of thinking discussed in Daniel Kahneman's book, Thinking, Fast and Slow. System 1 is the kind used for pattern recognition, while System 2 is far better suited for planning, deduction, and deliberative thinking. In this view, deep learning best models the first kind of thinking, symbolic reasoning best models the second, and both are needed.

To mitigate this, ethical considerations must be integrated into the design phase, along with clear guidelines and principles for development and deployment [117]. Establishing clear ethical guidelines and principles for developing and deploying Neuro-Symbolic AI in military applications can guide responsible decision-making [118]. Integrating ethics into the design phase mitigates potential negative consequences [117]. Additionally, implementing robust monitoring and evaluation mechanisms for AI systems during deployment is crucial for identifying and addressing potential biases or unintended outcomes [119]. Integrating symbolic reasoning with neural networks can enhance the adaptability and reasoning capabilities of robots [76]. A robot that uses symbolic reasoning can efficiently plan its route through an environment more effectively and adaptively than a robot relying on learning from data [77].

More advanced knowledge-based systems, such as Soar, can also perform meta-level reasoning, that is, reasoning about their own reasoning in terms of deciding how to solve problems and monitoring the success of problem-solving strategies. The widespread adoption of AI in warfare poses a significant challenge given its potential for unforeseen consequences and existential threats in the long term [128, 129]. Power dynamics among nations may shift dramatically as they leverage AI, potentially leading to an arms race and asymmetric conflicts [130]. Moreover, unforeseen consequences like loss of control, escalation, and existential threats demand responsible development and international cooperation to mitigate the risks before they become realities [130, 129]. To address the long-term risks and existential threats posed by AI in warfare [130, 129], fostering international cooperation on preventive measures to mitigate loss of control and escalation is crucial. As shown in Figure 5, the learning cycle of a Neuro-Symbolic AI system involves the integration of neural and symbolic components in a coherent and iterative process.

Neural networks excel at learning complex patterns from data, but they often lack explicit knowledge representation and logical reasoning capabilities [21, 39]. Symbolic reasoning techniques, on the other hand, are well-suited for tasks involving structured knowledge and logic-based reasoning, but they can struggle with data-driven learning and generalization [17]. While symbolic reasoning handles structured knowledge and logic-based inference, it may face challenges with large and complex problems; in contrast, neural networks efficiently learn from extensive datasets and recognize complex patterns [40, 39].

Limitations were discovered in using simple first-order logic to reason about dynamic domains. Problems were found both in enumerating the preconditions for an action to succeed and in providing axioms for what did not change after an action was performed. Early work covered both applications of formal reasoning emphasizing first-order logic and attempts to handle common-sense reasoning in a less formal manner. We note that this was the state of the art at the time; the situation has changed considerably in recent years, with a number of modern NSI approaches now dealing with the problem properly.

The General Problem Solver (GPS) cast planning as problem solving and used means-ends analysis to create plans. Graphplan takes a least-commitment approach to planning, rather than sequentially choosing actions from an initial state working forwards, or from a goal state working backwards. Satplan is an approach in which a planning problem is reduced to a Boolean satisfiability problem. Marvin Minsky first proposed frames as a way of interpreting common visual situations, such as an office, and Roger Schank extended this idea to scripts for common routines, such as dining out. Cyc has attempted to capture useful common-sense knowledge and has "micro-theories" to handle particular kinds of domain-specific reasoning. Research in neuro-symbolic AI has a very long tradition, and we refer the interested reader to overview works such as Refs [1,3] that were written before the most recent developments.

We further explore its potential to solve complex tasks in various domains, in addition to its applications in military contexts. Through this exploration, we address ethical, strategic, and technical considerations crucial to the development and deployment of Neuro-Symbolic AI in military and civilian applications. Contributing to the growing body of research, this study represents a comprehensive exploration of the extensive possibilities offered by Neuro-Symbolic AI.

Neural networks are good at dealing with complex and unstructured data, such as images and speech. They can learn to perform tasks such as image recognition and natural language processing with high accuracy. The deep learning hope—seemingly grounded not so much in science, but in a sort of historical grudge—is that intelligent behavior will emerge purely from the confluence of massive data and deep learning.

We’ve relied on the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. Specifically, we wanted to combine the learning representations that neural networks create with the compositionality of symbol-like entities, represented by high-dimensional and distributed vectors. The idea is to guide a neural network to represent unrelated objects with dissimilar high-dimensional vectors. One promising approach towards this more general AI is in combining neural networks with symbolic AI.
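
The claim that unrelated objects can be kept dissimilar in high dimensions can be checked empirically: random vectors become nearly orthogonal as dimensionality grows, roughly like 1/sqrt(d). A quick sketch, illustrative only:

```python
import numpy as np

# Cosine similarity of two random vectors shrinks with dimensionality,
# which is why high-dimensional codes keep unrelated objects distinct.
rng = np.random.default_rng(42)
for d in (10, 100, 10_000):
    a, b = rng.standard_normal(d), rng.standard_normal(d)
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    print(d, round(cos, 3))   # similarity drifts toward 0 as d grows
```

This concentration effect is the mathematical property of high-dimensional spaces the paragraph above alludes to.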

This approach aims to map neural embeddings, which are distributed numerical representations learned by neural networks, to symbolic entities such as predicates, logical symbols, or rules. This makes the symbolic representations easier for humans to understand and can be used for tasks that involve logical reasoning [48]. Over the next few decades, research dollars flowed into symbolic methods used in expert systems, knowledge representation, game playing and logical reasoning. However, interest in all AI faded in the late 1980s as AI hype failed to translate into meaningful business value. Symbolic AI emerged again in the mid-1990s with innovations in machine learning techniques that could automate the training of symbolic systems, such as hidden Markov models, Bayesian networks, fuzzy logic and decision tree learning.
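
A minimal sketch of such a mapping, assuming a made-up set of symbol prototypes: decode a neural embedding into a discrete symbol by choosing the prototype with the highest cosine similarity.

```python
import numpy as np

# Hypothetical prototype vector per symbol; a real system would learn
# these jointly with the network. The query embedding is invented too.
symbols = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.8, 0.3, 0.1]),
    "car": np.array([0.0, 0.2, 0.9]),
}

def decode(embedding):
    """Return the symbol whose prototype is most similar to the embedding."""
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(symbols, key=lambda s: cos(embedding, symbols[s]))

print(decode(np.array([0.1, 0.1, 0.8])))  # -> "car"
```

Once an embedding has been decoded to a symbol like this, downstream logical rules can operate on it directly.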

Expert knowledge in military command and control can be used to design advanced AI systems that facilitate effective communication and coordination among different units, enhancing overall operational efficiency. AI techniques play a crucial role in improving communication and coordination among military units [104]. By providing real-time data, enhancing situational awareness, and streamlining decision-making processes [104, 105], these techniques facilitate smoother information flow and faster decision-making during critical moments [105]. Driven heavily by empirical success, DL then largely moved away from the original biologically inspired models of perceptual intelligence toward a "whatever works in practice" engineering approach. In essence, the concept evolved into a very generic methodology of using gradient descent to optimize the parameters of almost arbitrary nested functions, leading many to rebrand the field yet again as differentiable programming.

Several factors contribute to this complexity, including the dynamic and unpredictable nature of warfare, uncertainty and incomplete information, and the need to adapt to changing environments [124]. Addressing these challenges requires developing models that can adjust to evolving information and scenarios, effectively manage uncertainty and incompleteness in data, and integrate knowledge from multiple disciplines seamlessly. Such integration of knowledge from multiple disciplines is crucial for the robustness and accuracy of symbolic representations [124]. Determining the appropriate level of human control in Neuro-Symbolic AI-driven LAWS poses challenges [123]. Establishing responsibility and accountability for actions taken by autonomous systems becomes complex, especially in situations requiring human judgment, such as navigating ethical dilemmas or exceeding AI capabilities [123]. It is therefore important to ensure that humans maintain meaningful control over autonomous weapon systems by integrating human-in-the-loop decision-making processes [124, 93].

In our paper "Robust High-dimensional Memory-augmented Neural Networks," published in Nature Communications,1 we present a new idea linked to neuro-symbolic AI, based on vector-symbolic architectures. Good-Old-Fashioned Artificial Intelligence (GOFAI) is essentially a synonym for symbolic AI, characterized by an exclusive focus on symbolic reasoning and logic. The approach soon lost momentum, however, since researchers taking the GOFAI route were tackling the "Strong AI" problem: constructing autonomous software as intelligent as a human. The automated theorem provers discussed below can prove theorems in first-order logic. Horn clause logic is more restricted than first-order logic and is used in logic programming languages such as Prolog.
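
Horn clause reasoning can be illustrated in a few lines of Python. The sketch below does forward chaining over ground (variable-free) Horn clauses; real Prolog additionally performs unification over variables, which this toy omits, and the family facts are invented.

```python
# Each rule is (body, head), read as "head holds if every atom in the
# body holds". Facts are simply rules with empty bodies.
rules = [
    ((), "parent(tom, bob)"),
    ((), "parent(bob, ann)"),
    (("parent(tom, bob)", "parent(bob, ann)"), "grandparent(tom, ann)"),
]

def forward_chain(rules):
    """Derive all atoms entailed by the rules, iterating to a fixed point."""
    known = set()
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and all(a in known for a in body):
                known.add(head)
                changed = True
    return known

print(sorted(forward_chain(rules)))
```

The restriction to Horn clauses is what makes this fixed-point procedure complete and efficient, which is precisely why Prolog adopts it.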

The paper addresses ethical, strategic, and technical considerations related to the development and deployment of Neuro-Symbolic AI in the military. It identifies key challenges and proposes promising directions for future development, emphasizing responsible deployment in areas such as logistics optimization, situational awareness enhancement, and dynamic decision-making. In Table 2, we present a comparison between our comprehensive exploration of Neuro-Symbolic AI for military applications and existing research by highlighting key distinctions and contributions in the context of our work. Symbolic AI laid the foundation for much of modern artificial intelligence by providing structured ways to represent knowledge and logical reasoning. While its limitations in scalability and adaptability have led to the rise of other AI approaches, its principles still play a role in fields where structured knowledge and clear, interpretable rules are crucial. The evolution of AI may see an increased interest in combining symbolic AI with data-driven methods to create systems that are both powerful and explainable.

Integrating both approaches, known as neuro-symbolic AI, can provide the best of both worlds, combining the strengths of symbolic AI and Neural Networks to form a hybrid architecture capable of performing a wider range of tasks. While we cannot give the whole neuro-symbolic AI field due recognition in a brief overview, we have attempted to identify the major current research directions based on our survey of recent literature, and we present them below. Literature references within this text are limited to general overview articles, but a supplementary online document referenced at the end contains references to concrete examples from the recent literature. Examples for historic overview works that provide a perspective on the field, including cognitive science aspects, prior to the recent acceleration in activity, are Refs [1,3]. Military operations must adhere to international laws and ethical guidelines [158, 159].

An infinite number of pathological conditions can be imagined, e.g., a banana in a tailpipe could prevent a car from operating correctly. Qualitative simulation, such as Benjamin Kuipers’s QSIM,[90] approximates human reasoning about naive physics, such as what happens when we heat a liquid in a pot on the stove. We expect it to heat and possibly boil over, even though we may not know its temperature, its boiling point, or other details, such as atmospheric pressure. Time periods and titles are drawn from Henry Kautz’s 2020 AAAI Robert S. Engelmore Memorial Lecture[19] and the longer Wikipedia article on the History of AI, with dates and titles differing slightly for increased clarity.
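
The kind of qualitative reasoning QSIM performs can be caricatured in a few lines; the states and transition rule below are drastically simplified inventions for illustration, not QSIM's actual algorithm.

```python
# Track only the qualitative state of a heated liquid: its region
# relative to the boiling landmark and its direction of change,
# with no numeric temperature at all.
def next_state(state):
    region, direction = state
    if region == "below-boiling" and direction == "increasing":
        return ("at-boiling", "steady")   # temperature reaches the landmark
    if region == "at-boiling":
        return ("at-boiling", "steady")   # plateaus there; the liquid boils
    return state

s = ("below-boiling", "increasing")
print(next_state(s))  # ('at-boiling', 'steady')
```

The point, as in the pot-on-the-stove example above, is that useful predictions follow without knowing the temperature, the boiling point, or the atmospheric pressure.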

But together, they achieve impressive synergies not possible with either paradigm alone. Another way the two AI paradigms can be combined is by using neural networks to help prioritize how symbolic programs organize and search through multiple facts related to a question. For example, if an AI is trying to decide if a given statement is true, a symbolic algorithm needs to consider whether thousands of combinations of facts are relevant. „This is a prime reason why language is not wholly solved by current deep learning systems,” Seddiqi said.

Addressing these challenges requires legal frameworks that clearly define accountability for actions taken by autonomous systems, along with mechanisms to assign responsibility appropriately, whether to manufacturers, programmers, or military commanders [122, 120]. The integration of AI into lethal weapons raises several ethical and moral questions [111], such as discrimination, proportionality, and dehumanization, regarding the moral implications of delegating critical decisions to machines [112, 113]. One potential approach to address this challenge is to develop comprehensive ethical guidelines and standards for the deployment of Neuro-Symbolic AI in military applications. These guidelines should encompass principles of discrimination, proportionality, and accountability [117, 118]. Furthermore, implementing robust monitoring and evaluation mechanisms is crucial to identify and address potential biases or unintended outcomes during AI system deployment [119]. The RAID program is another example of Neuro-Symbolic AI used in military applications, as discussed in [38].

In symbolic AI, discourse representation theory and first-order logic have been used to represent sentence meanings. Latent semantic analysis (LSA) and explicit semantic analysis also provided vector representations of documents. In the latter case, vector components are interpretable as concepts named by Wikipedia articles. The deployment of Neuro-Symbolic AI in military operations raises significant ethical concerns related to autonomous decision-making [90]. These systems, particularly neural networks, exhibit complex and non-linear behavior that can lead to unforeseen consequences, challenging control, and foreseeability.
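
Latent semantic analysis can be sketched in miniature as an SVD of a term-document count matrix; the tiny corpus and counts below are invented for illustration.

```python
import numpy as np

# Rows are terms, columns are documents; entries are raw counts.
#            doc0 doc1 doc2
X = np.array([[2, 0, 1],   # "neural"
              [1, 0, 0],   # "network"
              [0, 2, 1],   # "logic"
              [0, 1, 0]])  # "symbol"

# Factor with SVD and keep the top-k singular directions: each
# document becomes a k-dimensional vector in the latent space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T   # one k-dim vector per document

print(doc_vectors.shape)  # (3, 2)
```

Unlike the explicit-semantic-analysis case mentioned above, these latent components have no ready-made names; they are whatever directions best explain the co-occurrence counts.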

  • For example, DeepMind’s AlphaGo used symbolic techniques to improve the representation of game layouts, process them with neural networks and then analyze the results with symbolic techniques.
  • Due to these problems, most of the symbolic AI approaches remained in their elegant theoretical forms, and never really saw any larger practical adoption in applications (as compared to what we see today).
  • In the Symbolic approach, AI applications process strings of characters that represent real-world entities or concepts.
  • Once they are built, symbolic methods tend to be faster and more efficient than neural techniques.

They do so by effectively reflecting the variations in the input data structures into variations in the structure of the neural model itself, constrained by some shared parameterization (symmetry) scheme reflecting the respective model prior. It has now been argued by many that a combination of deep learning with the high-level reasoning capabilities present in the symbolic, logic-based approaches is necessary to progress towards more general AI systems [9,11,12]. With this paradigm shift, many variants of the neural networks from the ’80s and ’90s have been rediscovered or newly introduced. Benefiting from the substantial increase in the parallel processing power of modern GPUs, and the ever-increasing amount of available data, deep learning has been steadily paving its way to completely dominate the (perceptual) ML. For example, AI models might benefit from combining more structural information across various levels of abstraction, such as transforming a raw invoice document into information about purchasers, products and payment terms. An internet of things stream could similarly benefit from translating raw time-series data into relevant events, performance analysis data, or wear and tear.

This includes developing override mechanisms to allow humans to intervene and prevent unlawful or unethical actions by autonomous systems [125]. Modern autonomous weapon systems raise questions about their impact on various laws, including international human rights law and the right to life. This is particularly evident in policing, crowd control, border security, and military applications [81]. These systems may also pose challenges in complying with the rules of war, which require the differentiation of combatants from civilians and the avoidance of unnecessary suffering [110, 87].

The reliability of autonomous weapons systems is crucial in minimizing the risk of unintended consequences [143, 10]. This involves ensuring the reliability of sensor data, communication systems, and decision-making algorithms. Faulty sensors or misinterpretations can lead to targeting the wrong individuals or objects, leading to civilian casualties, and posing legal and ethical challenges [144]. Sensors and their communication channels are vulnerable to cyberattacks that can manipulate data, causing malfunctions or deliberate targeting of unintended entities.

By symbolic we mean approaches that rely on the explicit representation of knowledge using formal languages—including formal logic—and the manipulation of language items (‘symbols’) by algorithms to achieve a goal. Incorporating expert knowledge into the underlying AI models can significantly enhance their ability to reason about complex problems and align their outputs with human understanding and expertise [50]. This involves domain-specific knowledge, rules, and insights provided by human experts in a particular field. In the symbolic component of a Neuro-Symbolic AI system, expert knowledge is often encoded in the form of symbolic rules and logical expressions that capture the structured information and reasoning processes relevant to the application domain [50]. Recent advancements have introduced several innovative techniques for incorporating expert knowledge into AI models. Retrieval Augmented Generation (RAG) leverages retrieval mechanisms to enhance the generation capabilities of models by integrating external knowledge sources [51].
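
The retrieval step of RAG can be caricatured without any model at all. The sketch below scores documents by keyword overlap with the query, whereas real systems use dense embeddings and an actual language model; the documents and query are invented for the example.

```python
# Toy retrieval: pick the document sharing the most words with the
# query, then prepend it to the prompt as grounding context.
docs = [
    "Horn clauses restrict first-order logic and underpin Prolog.",
    "Graph neural networks propagate features over graph edges.",
    "Vector-symbolic architectures bind roles and fillers.",
]

def retrieve(query, docs):
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

query = "how do graph neural networks work"
context = retrieve(query, docs)
prompt = f"Context: {context}\nQuestion: {query}"
print(prompt)
```

Swapping the word-overlap scorer for an embedding-based one turns this skeleton into the retrieval stage of a realistic RAG pipeline.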

For example, who should be held responsible if an autonomous weapons system kills an innocent civilian? Is it the manufacturer, the programmer, or the military commander who ordered the attack? The work in [122] provides a comprehensive analysis of the legality of using autonomous weapons systems under international law. Additionally, it examines the challenges of holding individuals accountable for violations of international humanitarian law involving autonomous weapons systems [122].

However, in the meantime, a new stream of neural architectures based on dynamic computational graphs became popular in modern deep learning to tackle structured data in the (non-propositional) form of various sequences, sets, and trees. Most recently, an extension to arbitrary (irregular) graphs then became extremely popular as Graph Neural Networks (GNNs). From a more practical perspective, a number of successful NSI works then utilized various forms of propositionalisation (and “tensorization”) to turn the relational problems into the convenient numeric representations to begin with [24]. However, there is a principled issue with such approaches based on fixed-size numeric vector (or tensor) representations in that these are inherently insufficient to capture the unbound structures of relational logic reasoning. Consequently, all these methods are merely approximations of the true underlying relational semantics.
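
One round of GNN-style message passing can be sketched with plain matrix operations; the toy graph, features, and weights below are fixed inventions so that the example stays deterministic.

```python
import numpy as np

# Each node averages its neighbours' features (including itself via a
# self-loop), then applies a shared linear map and a ReLU.
A = np.array([[1, 1, 0],     # 3-node chain graph with self-loops
              [1, 1, 1],
              [0, 1, 1]], dtype=float)
A_hat = A / A.sum(axis=1, keepdims=True)   # row-normalised propagation matrix

H = np.array([[1.0, 0.0],    # initial node features
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.array([[1.0, -1.0],   # shared weight matrix (fixed, not learned here)
              [0.5,  0.5]])

H_next = np.maximum(0, A_hat @ H @ W)      # aggregate, transform, ReLU
print(H_next.shape)  # (3, 2)
```

Because the same `W` is applied at every node, the computation graph automatically reshapes itself to any input graph, which is exactly the shared-parameterization idea described above.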

Non-lethal autonomous weapon systems (NLAWS) are an evolving class of autonomous weapons designed for military and security purposes [92]. The primary goal of these systems is to incapacitate or deter adversaries without causing significant lethality. They employ non-lethal means, such as disabling electronics or inducing temporary incapacitation, to achieve their objectives. Examples include rubber bullets, tear gas, and electromagnetic jammers.
