Could Symbolic AI Unlock Human-like Intelligence?

By VernoNews | November 29, 2025 | 13 min read


Will computers ever match or surpass human-level intelligence, and if so, how? When the Association for the Advancement of Artificial Intelligence (AAAI), based in Washington DC, asked its members earlier this year whether neural networks, the current star of artificial-intelligence systems, would alone be enough to reach this goal, the overwhelming majority said no. Instead, most said, a heavy dose of an older form of AI would be needed to bring these systems up to par: symbolic AI.

Sometimes known as 'good old-fashioned AI', symbolic AI is based on formal rules and an encoding of the logical relationships between concepts. Arithmetic is symbolic, for example, as are 'if–then' statements and computer programming languages such as Python, along with flow charts or Venn diagrams that map how, say, cats, mammals and animals are conceptually related. Decades ago, symbolic systems were an early front-runner in the AI effort. In the early 2010s, however, they were vastly outpaced by more-flexible neural networks. These machine-learning models excel at learning from vast amounts of data, and underlie large language models (LLMs) as well as chatbots such as ChatGPT.
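As a rough illustration, the kind of 'if–then' rules and concept hierarchy described here (cats, mammals, animals) can be written in a few lines of Python; the facts and the rule below are invented for illustration, not taken from any real system:

```python
# A minimal sketch of symbolic AI: hand-written facts plus an 'if-then'
# rule over a small concept hierarchy.

# Facts: direct 'is-a' relationships between concepts.
IS_A = {
    "cat": "mammal",
    "dog": "mammal",
    "mammal": "animal",
}

def is_a(concept: str, category: str) -> bool:
    """Rule: X is a CATEGORY if its parent is CATEGORY, or (recursively)
    if its parent is itself a CATEGORY."""
    parent = IS_A.get(concept)
    if parent is None:
        return False
    return parent == category or is_a(parent, category)

print(is_a("cat", "animal"))   # True: cat -> mammal -> animal
print(is_a("cat", "dog"))      # False
```

Unlike a neural network, every conclusion this code reaches can be traced back to an explicit fact and rule, which is exactly the transparency symbolic systems are prized for.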

Now, however, the computer-science community is pushing hard for a better and bolder melding of the old and the new. 'Neurosymbolic AI' has become the hottest buzzword in town. Brandon Colelough, a computer scientist at the University of Maryland in College Park, has charted the meteoric rise of the concept in academic papers. These reveal a spike of interest in neurosymbolic AI that started around 2021 and shows no sign of slowing down.




Plenty of researchers are heralding the trend as an escape from what they see as an unhealthy monopoly of neural networks in AI research, and expect the shift to deliver smarter and more reliable AI.

A better melding of these two methods could lead to artificial general intelligence (AGI): AI that can reason and generalize its knowledge from one situation to another as well as humans do. It could also be useful for high-risk applications, such as military or medical decision-making, says Colelough. Because symbolic AI is transparent and understandable to humans, he says, it doesn't suffer from the 'black box' syndrome that can make neural networks hard to trust.

There are already good examples of neurosymbolic AI, including Google DeepMind's AlphaGeometry, a system reported last year that can reliably solve maths Olympiad problems: questions aimed at talented secondary-school students. But understanding how best to combine neural networks and symbolic AI into an all-purpose system is a formidable challenge.

“You’re really architecting this kind of two-headed beast,” says computer scientist William Regli, also at the University of Maryland.

Disagreement

In 2019, computer scientist Richard Sutton posted a short essay entitled ‘The bitter lesson’ on his blog (see go.nature.com/4paxykf). In it, he argued that, since the 1950s, people have repeatedly assumed that the best way to make intelligent computers is to feed them with all the insights that humans have arrived at about the rules of the world, in fields from physics to social behaviour. The bitter pill to swallow, wrote Sutton, is that time and time again, symbolic methods have been outdone by systems that use a ton of raw data and scaled-up computational power to leverage ‘search and learning’. Early chess-playing computers, for example, that were trained on human-devised strategies were outperformed by those that were simply fed lots of game data.

This lesson has been widely quoted by proponents of neural networks to support the idea that making these systems ever-bigger is the best path to AGI. But many researchers argue that the essay overstates its case and downplays the essential part that symbolic systems can and do play in AI. For example, the best chess program today, Stockfish, pairs a neural network with a symbolic tree of allowable moves.

Neural nets and symbolic algorithms each have pros and cons. Neural networks are made up of layers of nodes with weighted connections that are adjusted during training to recognize patterns and learn from data. They are fast and creative, but they are also prone to making things up and can't reliably answer questions beyond the scope of their training data.
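What 'layers of nodes with weighted connections' means can be sketched as a single forward pass through a tiny two-layer network; the weights below are arbitrary placeholders, whereas in a real system they would be adjusted during training:

```python
# A minimal sketch of a neural network's forward pass: each node sums its
# weighted inputs and applies a squashing function. The weights here are
# arbitrary; training would adjust them to fit data.
import math

def layer(inputs, weights, biases):
    """One layer: each output node takes a weighted sum of the inputs,
    adds a bias, then applies the logistic sigmoid."""
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-total)))
    return outputs

x = [0.5, -1.0]                                        # input features
h = layer(x, [[0.8, -0.2], [0.3, 0.9]], [0.0, 0.1])    # hidden layer
y = layer(h, [[1.0, -1.0]], [0.0])                     # output layer
print(y)   # a single value between 0 and 1
```

Note that nothing in the network is a human-readable rule: the 'knowledge' lives entirely in the numeric weights, which is why such models are hard to inspect.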

Symbolic systems, meanwhile, struggle to embody 'messy' concepts, such as human language, that involve huge rule databases that are difficult to build and slow to search. But their workings are transparent, and they are good at reasoning, using logic to apply their general knowledge to fresh situations.

When put to use in the real world, neural networks that lack symbolic knowledge make classic errors: image generators might draw people with six fingers on each hand because they haven't learnt the general concept that hands typically have five; video generators struggle to make a ball bounce around a scene because they haven't learnt that gravity pulls things downwards. Some researchers blame such errors on a lack of data or computing power, but others say that the errors illustrate neural networks' fundamental inability to generalize knowledge and reason logically.

Many argue that adding symbolism to neural nets might be the best, and perhaps the only, way to inject logical reasoning into AI. The global technology firm IBM, for example, is backing neurosymbolic methods as a path to AGI. But others remain sceptical: Yann LeCun, one of the fathers of modern AI and chief AI scientist at tech giant Meta, has said that neurosymbolic approaches are “incompatible” with neural-network learning.

Sutton, who is at the University of Alberta in Edmonton, Canada, and won the 2024 Turing Award, the equivalent of the Nobel prize for computer science, holds firm to his original argument: “The bitter lesson still applies to today’s AI,” he told Nature. This means, he says, that “adding a symbolic, more manually crafted element is probably a mistake.”

Gary Marcus, an AI entrepreneur, author and cognitive scientist based in Vancouver, Canada, and one of the most vocal advocates of neurosymbolic AI, tends to frame this difference of opinion as a philosophical battle that is now being settled in his favour.

Others, such as roboticist Leslie Kaelbling at the Massachusetts Institute of Technology (MIT) in Cambridge, say that arguments over which view is correct are a distraction, and that people should just get on with whatever works. “I’m a magpie. I’ll do anything that makes my robots better.”

Mix and match

Beyond the fact that neurosymbolic AI aims to meld the benefits of neural nets with the benefits of symbolism, its definition is blurry. Neurosymbolic AI encompasses “a very large universe,” says Marcus, “of which we’ve explored only a tiny bit.”

There are many broad approaches, which people have tried to categorize in various ways. One option highlighted by many is the use of symbolic methods to improve neural nets. AlphaGeometry is arguably one of the most sophisticated examples of this strategy: it trains a neural net on a synthetic data set of maths problems produced using a symbolic computer language, making the solutions easier to check and ensuring fewer errors. It combines the two elegantly, says Colelough. In another example, ‘logic tensor networks’ provide a way to encode symbolic logic for neural networks. Statements can be assigned a fuzzy-truth value: a number somewhere between 1 (true) and 0 (false). This provides a framework of rules to help the system reason.
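The fuzzy-truth idea can be sketched by turning logical connectives into arithmetic over values between 0 and 1, so that a rule's 'degree of satisfaction' is a smooth number a network could be trained against. This sketch uses the product forms of AND and OR; real logic tensor networks differ in their details:

```python
# A sketch of fuzzy-truth logic of the kind logic tensor networks build on:
# each statement has a truth value in [0, 1], and logical connectives
# become arithmetic, so rules remain differentiable for training.
# (Product t-norm versions shown; actual LTN formulations vary.)

def f_and(a: float, b: float) -> float:
    return a * b                       # product t-norm for AND

def f_or(a: float, b: float) -> float:
    return a + b - a * b               # probabilistic sum for OR

def f_not(a: float) -> float:
    return 1.0 - a

def implies(a: float, b: float) -> float:
    return f_or(f_not(a), b)           # a -> b  ==  (not a) or b

# 'This object is a cat' is fairly true; the rule 'cats are mammals'
# should score close to 1 when the network's outputs respect it.
is_cat, is_mammal = 0.9, 0.95
rule_score = implies(is_cat, is_mammal)
print(round(rule_score, 3))            # 0.955
```

Because `rule_score` is an ordinary continuous function of the inputs, a training loop can penalize a network whenever the score drops, nudging its outputs toward logical consistency.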

Another broad approach does what some would say is the reverse, using neural nets to finesse symbolic algorithms. One problem with symbolic knowledge databases is that they are often so large that they take a very long time to search: the 'tree' of all possible moves in a game of Go, for example, contains about 10^170 positions, which is unfeasibly large to crunch through. Neural networks can be trained to predict the most promising subset of moves, allowing the system to cut down how much of the 'tree' it has to search, and thus speeding up the time it takes to pick the best move. That's what Google's AlphaGo did when it famously outperformed a Go grandmaster.
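The pruning idea can be sketched with a stand-in 'policy' function: rather than expanding every legal move, the search keeps only the few the policy scores highest. The scoring function below is a toy heuristic, not a trained network:

```python
# A sketch of neural-guided tree search: instead of expanding all legal
# moves, expand only the top-scoring few. In AlphaGo-style systems the
# policy is a trained network; here it is a toy stand-in function.

def policy_scores(state, moves):
    """Stand-in for a trained policy network: score each candidate move
    (toy heuristic: moves 'near' the state score higher)."""
    return {m: 1.0 / (1 + abs(m - state)) for m in moves}

def expand(state, moves, top_k=3):
    """Keep only the top_k most promising moves, shrinking the tree."""
    scored = policy_scores(state, moves)
    return sorted(moves, key=lambda m: scored[m], reverse=True)[:top_k]

legal_moves = list(range(20))            # pretend there are 20 legal moves
pruned = expand(state=7, moves=legal_moves)
print(pruned)                            # [7, 6, 8]
```

Cutting the branching factor from 20 to 3 at every level shrinks the tree exponentially with depth, which is what makes otherwise-intractable searches like Go's feasible.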

An alternative idea is to insert symbolics into the middle of an LLM's workflow, in the same way that consulting a calculator might help a person to solve a maths puzzle. Using rules-based systems during crucial reasoning steps can help to keep LLMs from going off-track, many argue. Projects including the Program-Aided Language (PAL) model, for example, use an LLM to convert natural-language tasks into Python code, use that symbolic code to solve the problem, and then interpret that solution back into natural language with an LLM.
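The shape of a PAL-style pipeline can be sketched as below. Since no model can run here, the LLM call is replaced by a canned stub, and the final rephrasing step by a template; only the execute-the-generated-code step is real:

```python
# A sketch of a PAL-style pipeline: natural language -> code -> execute ->
# answer. The 'LLM' is a canned stub standing in for a real model call.

def fake_llm_to_code(question: str) -> str:
    """Stand-in for an LLM that translates a word problem into Python.
    A real PAL system would prompt a model; this output is hard-coded."""
    return "result = (3 * 12) + 5"

def solve(question: str) -> str:
    code = fake_llm_to_code(question)
    namespace = {}
    exec(code, namespace)              # the symbolic step: run the program
    answer = namespace["result"]
    # A second LLM call would normally phrase this; we template it instead.
    return f"The answer is {answer}."

print(solve("Ali has 3 boxes of 12 eggs and 5 loose eggs. How many eggs?"))
# prints: The answer is 41.
```

The point of the division of labour is that the arithmetic is done by the Python interpreter, which never miscalculates, while the LLM handles only the translation in and out of natural language.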

Jiayuan Mao, an AI researcher who has just completed her PhD at MIT with Kaelbling and is on her way to the University of Pennsylvania in Philadelphia, has had success in using neurosymbolic AI to make robot training more efficient. Her method is to use a neural network to recognize objects (such as a red rubber ball or a green glass cube) in a visual scene and then use a symbolic algorithm to reason through relational questions about those objects (such as ‘is the rubber object behind the green object?’). A pure neural network would need 700,000 examples in its training data to achieve 99% accuracy on this task, she says. But by adding symbolic methods, she needs just 10% of that number. “Even if you use 1%, you can still get 92% accuracy, which is quite impressive,” she says. A similar neurosymbolic system she created trounced a neural-network-based system at guiding a robot that encountered unfamiliar objects while washing dishes or making tea.
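That division of labour can be sketched by replacing the perception network with hand-coded detections: the neural half would output a symbolic scene description, and a symbolic reasoner answers relational questions over it. The object attributes and the `behind` relation below are illustrative inventions:

```python
# A sketch of the perception/reasoning split: a (here, faked) perception
# module emits a symbolic scene description, and a symbolic reasoner
# answers relational questions over it.

detections = [                           # stand-in for neural-net output
    {"material": "rubber", "colour": "red",   "depth": 2.0},
    {"material": "glass",  "colour": "green", "depth": 1.2},
]

def behind(a, b):
    """Symbolic relation: object a is behind object b if it is deeper."""
    return a["depth"] > b["depth"]

def query(material: str, relation, colour: str) -> bool:
    """Is there a <material> object in <relation> to a <colour> object?"""
    return any(
        relation(a, b)
        for a in detections if a["material"] == material
        for b in detections if b["colour"] == colour and a is not b
    )

print(query("rubber", behind, "green"))   # True: the rubber ball is deeper
```

Because the relational logic is written once as rules rather than learnt from examples, only the perception half needs training data, which is one plausible reading of why such hybrids need far fewer examples.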

Lost in translation

One of the big challenges for symbolic AI is how to encode often-slippery human knowledge within a language of logic and rules. One of the earliest attempts was a project known as Cyc, started by computer scientist Doug Lenat in 1984 and later overseen by his AI firm Cycorp, based in Austin, Texas. The intent was to explicitly articulate common-sense knowledge and rules of thumb, such as ‘a daughter is a child’, ‘people love their children’ and ‘seeing someone you love makes you smile’. The project’s language, CycL, uses symbols (for logical operators such as IF, AND, OR and NOT) to express logical relationships so that an inference engine can easily draw conclusions, such as ‘seeing your child would make you smile’.
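The inference chain described here can be sketched with a tiny forward-chaining engine: hand-written rules (far simpler than CycL, and with invented names) are applied repeatedly until no new facts appear:

```python
# A sketch of forward-chaining inference over Cyc-flavoured rules:
# the engine applies each rule to known facts until nothing new emerges.

facts = {("daughter_of", "jo", "pat")}

rules = [
    # IF X is a daughter of Y THEN X is a child of Y.
    (("daughter_of", "X", "Y"), ("child_of", "X", "Y")),
    # IF X is a child of Y THEN Y loves X.
    (("child_of", "X", "Y"), ("loves", "Y", "X")),
    # IF Y loves X THEN seeing X makes Y smile.
    (("loves", "Y", "X"), ("smiles_on_seeing", "Y", "X")),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                       # loop until a fixed point
        changed = False
        for (p_rel, a, b), (c_rel, ca, cb) in rules:
            for (rel, x, y) in list(facts):
                if rel == p_rel:
                    binding = {a: x, b: y}
                    new = (c_rel, binding[ca], binding[cb])
                    if new not in facts:
                        facts.add(new)
                        changed = True
    return facts

derived = forward_chain(facts, rules)
print(("smiles_on_seeing", "pat", "jo") in derived)   # True
```

From the single fact 'jo is a daughter of pat', the engine derives that jo is pat's child, that pat loves jo, and hence that seeing jo makes pat smile: exactly the style of conclusion the article attributes to Cyc's inference engine.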

Cyc, which now holds more than 25 million axioms, has been used in a variety of AI efforts and has inspired other projects, such as Google's Knowledge Graph, which holds more than 500 billion facts. Today, 'knowledge engineers' use a similar method to gather human-generated facts and relationships, build specialized databases and integrate them with AI.

Symbolic databases can help an AI to generalize knowledge from one situation and apply it in another, says Kaelbling, which is a powerful way to make reasoning more efficient. But there is a trade-off in accuracy when dealing with subjects for which there are many exceptions to the ‘rules’: not all people love their children, for example, and seeing something you love doesn’t always make you smile. Symbolics should be included only when it is helpful to do so, she says. “Cyc was trying to turn common sense into math. That’s almost certainly a bad idea,” Kaelbling says.

In 2023, Marcus posted a paper with Lenat laying out what LLMs could learn from Cyc. As part of that work, the duo asked GPT-3, an early LLM of the type that underpins ChatGPT, to write CycL statements that encode the logical relationships in the sentence “Did you touch a blue object located in the capital of France on September 25th, 2022?” The response “at first amazed the Cyc team”, they report, because it generated what appeared to be the right sort of statements in the right sort of language. But on closer inspection, GPT-3 made many crucial errors, they write, such as concluding that “the thing that is touching the blue object is the date”.

“It looks like it’s good, it looks like it should work, but it’s completely rubbish,” says Colelough. This shows that it is pointless to simply ram together a symbolic engine and a neural net, he says. “Then you might as well just not have the neurosymbolic system.”

What’s needed, Colelough says, is much more research on AI ‘metacognition’: how AI monitors and conducts its own thinking. That could enable AI ‘conductors’ to oversee a more sophisticated integration of the two paradigms, rather than having different engines simply take turns. Colelough says AlphaGeometry does this well, but in a limited context. If a flexible conductor that works for any domain of knowledge could be developed, “that would be AGI for me”, Colelough says.

There is much more work to do. Fresh hardware and chip architectures might be needed to run neurosymbolic AI efficiently. In time, other kinds of AI (perhaps based on neural networks, symbolic AI, both or neither) might become more exciting, such as quantum AI, a fledgling field that seeks to exploit the properties of the quantum world to improve AI.

For Mao, the ultimate goal is to leverage neural networks’ learning abilities to create rules, categories and paths of reasoning that humans aren’t yet aware of. “The hope is that eventually we can have systems that also invent their own symbolic representation and symbolic algorithms, so that they can really go beyond what a human knows,” she says. That might be like a computer discovering an as-yet-unknown mathematical or physical concept, perhaps analogous to π or the property of mass, and then encoding the new concept to help to expand knowledge. “We need to study how computers can teach humans, not how humans can teach machines.”

This article is reproduced with permission and was first published on November 25, 2025.
