The Laws of Thought: The Quest for a Mathematical Theory of the Mind

The Laws of Thought: The Quest for a Mathematical Theory of the Mind
Tom Griffiths,
Henry Holt and Company (February 2026)

Finally, after more than three centuries, here is the previously untold “story” of how to understand our “internal world.”

Based on my preliminary research as a non-scientist, this is what I have learned about mathematical theories of the mind: “It (or Computational Theory of Mind, CTM) posits that human cognition is a form of computation, where mental states are represented by symbols and mental processes by algorithms acting upon them. It models the brain as an information-processing system using, for example, probability theory, symbolic logic, or neural networks to simulate decision-making and perception.

“Key aspects of a mathematical theory of the mind include:

o Computational Theory of Mind (CTM): This approach suggests that thinking is similar to computer processing, where inputs are transformed into outputs via step-by-step algorithms (see, for example, the Routledge Encyclopedia of Philosophy).

o Representational Structures: Mental states are viewed as configurations in a formal system, allowing for logic-based analysis of reasoning.

o Probabilistic Models: Cognitive functions, especially in uncertainty, are modeled using Bayesian statistics to explain how the brain infers the best explanation for sensory data.

o Neural Networks and Connectionism: These models, a rival or complement to classical CTM, simulate mental processes using interconnected nodes inspired by neural structure.

o Mathematical Modeling of Consciousness: Some theories attempt to quantify consciousness using information theory, such as Integrated Information Theory (IIT), which calculates a system’s ability to integrate information (measured as Φ). This field bridges neuroscience and psychology, attempting to create rigorous models of how physical brain activity gives rise to mental phenomena.”
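
To make the “Probabilistic Models” item in the list above a bit more concrete, here is a brief sketch of my own (it is not taken from the book, and the hypotheses, priors, and likelihood numbers are invented) showing how Bayes’ rule weighs two candidate explanations of a noisy observation:

```python
# Toy Bayesian inference: which explanation best accounts for a noisy observation?
# The hypotheses, priors, and likelihoods below are invented purely for illustration.

priors = {"it_is_a_cat": 0.7, "it_is_a_dog": 0.3}        # beliefs before the observation
likelihoods = {"it_is_a_cat": 0.2, "it_is_a_dog": 0.6}   # P(hearing a bark | hypothesis)

# Bayes' rule: the posterior is proportional to likelihood times prior.
unnormalized = {h: likelihoods[h] * priors[h] for h in priors}
evidence = sum(unnormalized.values())                     # P(hearing a bark)
posteriors = {h: p / evidence for h, p in unnormalized.items()}

print({h: round(p, 4) for h, p in posteriors.items()})
# {'it_is_a_cat': 0.4375, 'it_is_a_dog': 0.5625}
```

Even with a prior that favors “cat,” the evidence shifts the rational belief toward “dog,” which is the kind of inference the quoted passage describes.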

What you have in The Laws of Thought is a comprehensive and inclusive survey of more than three centuries of a process — best viewed as a quest — to develop a mathematical theory of the mind. Tom Griffiths focuses on three theoretical frameworks for making mathematical models of cognition: rules and symbols, neural networks, and Bayesian models. In his words, “They offer complementary perspectives on the mind.”

These are among the passages of greatest interest and value to me; I list them here also to suggest the scope of Griffiths’s coverage:

o AI (Pages xiv, xvii, xviii, 72-73, 101-102, 151-152, 196-197, 202-203, and 296-298)
o Neural networks (xiv-xv, xvii-xviii, 179-181, 194-197, and 237-238)
o Gottfried Wilhelm Leibniz (xiv, 1-5, 10-11, 13-15, 98-99, 104-105, and 240-242)
o Thomas Bayes (xv, xviii, 247-249, 251-254, 265-278, and 291-292)
o George Boole (1-2, 14-18, 24-26, 35-37, 105-106, 237-238, and 294-296)

o Rules and symbols (17-18, 56-57, 61-63, 72-73, and 109-111)
o Alan Turing (37-40, 40-45, and 77-100)
o Features (50-51, 124-125, 135-139, 166-167, 263-266, and 278-281)
o George Miller (52-55, 91-92, and 152-153)
o Noam Chomsky (53-54, 116-117, 163-164, 214-216, 220-221, 282-285, and 288-289)

o Roger Shepard (125-126, 128-131, and 136-137)
o Amos Tversky (130-138)
o Connectionism and AI (193-200)
o Algorithmic level of analysis (232-237)
o Probability Theory (237-260, 267-268, and 270-272)

With patience as well as precision, Tom Griffiths explains how the first theoretical framework for making mathematical models of cognition took the insights of logic and applied them to a wide range of cognitive problems: reasoning, categorization, problem-solving, and language.
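
For readers who, like me, find a small example helpful, here is a toy sketch of my own (not Griffiths’s, and the facts and rules are invented) of how rules applied to symbols can carry out a simple categorization:

```python
# Toy rule-and-symbol reasoning: categorize an animal from symbolic features.
# The facts and rules are invented for illustration only.

facts = {"has_feathers", "lays_eggs", "can_fly"}

rules = [
    ({"has_feathers", "lays_eggs"}, "bird"),    # IF has_feathers AND lays_eggs THEN bird
    ({"has_fur", "gives_milk"}, "mammal"),      # IF has_fur AND gives_milk THEN mammal
]

# A rule's conclusion follows when every one of its conditions is among the facts.
conclusions = [category for conditions, category in rules if conditions <= facts]
print(conclusions)  # ['bird']
```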

Next, he explains how artificial neural networks combine continuous representations with powerful general-purpose methods for learning from data. He also explains how artificial neural networks power modern artificial intelligence, from systems (e.g., AlphaGo) that can learn to play games to systems (e.g., ChatGPT) that can carry out (mostly) informed conversations.
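
Again as my own rough illustration rather than anything taken from the book (the data, architecture, and settings are invented), here is a toy single-neuron “network” that learns the logical OR function from examples by gradient descent:

```python
import math
import random

# Toy single-neuron network learning the OR function by gradient descent.
# Data, architecture, and hyperparameters are invented for illustration only.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0

def predict(x):
    """Sigmoid output of the single neuron: a continuous value between 0 and 1."""
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

for _ in range(5000):                      # repeated small weight updates
    for x, y in data:
        error = predict(x) - y             # gradient of cross-entropy loss at the pre-activation
        w = [wi - 0.1 * error * xi for wi, xi in zip(w, x)]
        b -= 0.1 * error

print([round(predict(x), 2) for x, _ in data])  # predictions move toward [0, 1, 1, 1]
```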

Griffiths also draws on a principle that dates back to the 18th century (Bayes’ rule) when explaining how to characterize what conclusion a rational agent should reach from data under different assumptions about the biases the agent brings to the given problem.
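
One more sketch of my own (all the numbers are invented): two rational agents observe the same evidence but begin with different priors, so Bayes’ rule leads them to different conclusions, which is the point about biases made above:

```python
# Same data, different priors: Bayes' rule yields different rational conclusions.
# All numbers below are invented for illustration.

def posterior(prior_h, likelihood_h, likelihood_not_h):
    """P(H | data) for a binary hypothesis H, via Bayes' rule."""
    evidence = likelihood_h * prior_h + likelihood_not_h * (1 - prior_h)
    return likelihood_h * prior_h / evidence

# Both agents observe data that is four times more likely under H than under not-H.
likelihood_h, likelihood_not_h = 0.8, 0.2

print(round(posterior(0.5, likelihood_h, likelihood_not_h), 3))  # neutral prior   -> 0.8
print(round(posterior(0.1, likelihood_h, likelihood_not_h), 3))  # skeptical prior -> 0.308
```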

Tom Griffiths adds: “These three frameworks — rules and symbols, neural networks, and Bayesian models — offer complementary perspectives on the mind. Each framework highlights a different kind of mathematics that is ultimately going to be critical to understanding the Laws of Thought.”

Obviously, no brief commentary such as mine could possibly do full justice to the value of the information, insights, and counsel provided in The Laws of Thought, but I hope I have at least indicated why I hold Griffiths and his work in such high regard.

* * *

Here are two suggestions while you are reading this book. First, highlight key passages. Second, perhaps in a lined notebook kept near at hand, record your comments, questions, and action steps (preferably with deadlines). Pay special attention to the exceptionally informative Introduction and to Tom Griffiths’s concluding thoughts in the final chapter, “Putting It All Together.” I also commend him for his annotated “Notes,” Pages 305-342.

These two simple tactics — highlighting and documenting — will expedite frequent reviews of key material later.
