Note: these are some musings on similarities between LLMs and human thought
The Hard Problem of LLMs
The title is a play on the hard problem of consciousness, and on how LLMs may be approaching something like consciousness. The problem is usually stated along these lines: how do the material processes of the brain give rise to human consciousness? It is one of my favorite problems in philosophy, and this is the first time I have seen something resembling a possible answer to it. Let me explain.
Embeddings and understanding
The weights of an LLM can be seen as representing concepts. They are embeddings into a multi-dimensional space. People found that subtracting the embedding vector for woman from the one for queen gives a vector close to the one we get by subtracting v(man) from v(king) [1]. This seems to indicate that there is a space of concepts that we can index into. Calling this understanding may be a bit premature, but let's continue.
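As a toy illustration of that arithmetic (the 3-dimensional vectors below are hypothetical stand-ins, not real model embeddings, which have hundreds of dimensions):

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical toy embeddings for four words.
v = {
    "king":  np.array([0.8, 0.7, 0.1]),
    "queen": np.array([0.8, 0.7, 0.9]),
    "man":   np.array([0.2, 0.1, 0.1]),
    "woman": np.array([0.2, 0.1, 0.9]),
}

# v(queen) - v(woman) should land close to v(king) - v(man).
print(cosine(v["queen"] - v["woman"], v["king"] - v["man"]))  # close to 1.0
```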
Platonic representation
You might wonder whether the embeddings of one model are the same as another's. No, they probably are not. But there does seem to be a relationship: perhaps the relationships among the vectors in one model's embedding space are similar to those in another model's embedding space? Yes, that seems more likely. This idea is called the Platonic Representation Hypothesis [2].
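One way to make that intuition concrete (a sketch of my own, not the method from the paper, and using synthetic vectors in place of real model embeddings): compare the pairwise similarity structure of the same words in two different embedding spaces. If the hypothesis holds, the two similarity matrices should correlate even though the raw vectors do not match.

```python
import numpy as np

def similarity_matrix(vectors):
    # Pairwise cosine similarities within one embedding space.
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    return v @ v.T

# Hypothetical embeddings of the same 5 words from two "models".
rng = np.random.default_rng(0)
space_a = rng.normal(size=(5, 64))
space_b = space_a @ rng.normal(size=(64, 32))  # different space, related structure

sim_a = similarity_matrix(space_a)
sim_b = similarity_matrix(space_b)

# Correlate the off-diagonal entries: a high correlation means the
# relational structure is similar even though the vectors differ.
mask = ~np.eye(5, dtype=bool)
print(np.corrcoef(sim_a[mask], sim_b[mask])[0, 1])
```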
Are you thinking what I'm thinking?
Could we transform the embeddings from one model to another? Yes, it looks like we can. Jha et al. (2025) introduced the first method for translating text embeddings from one vector space to another without any paired data, encoders, or predefined matches. While this has security implications, it could also lead to innovations. What if we used the vector spaces from multiple models and tried to derive a more complete space, or one with more accurate vectors (such that they better represent Platonic semantic meanings)?
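Jha et al.'s method notably needs no paired data. A much simpler illustration of the underlying idea, assuming we do have paired anchor texts embedded by both models (everything below is synthetic), is to fit a linear map from one space to the other by least squares:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired embeddings: the same 100 texts in model A's and model B's spaces.
emb_a = rng.normal(size=(100, 64))
true_map = rng.normal(size=(64, 32))
emb_b = emb_a @ true_map + 0.01 * rng.normal(size=(100, 32))

# Fit W so that emb_a @ W approximates emb_b (ordinary least squares).
W, *_ = np.linalg.lstsq(emb_a, emb_b, rcond=None)

# Translate a new model-A embedding into model B's space.
new_a = rng.normal(size=(1, 64))
translated = new_a @ W
print(np.linalg.norm(translated - new_a @ true_map))  # small residual
```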
Is word generation thinking?
To answer that question we should define what thinking is. Google Gemini said: "Thinking is the mental process of manipulating information to form concepts, solve problems, make decisions, and understand the world." Grok said it is the "cognitive process of using one's mind to consider, reason about, or manipulate ideas..." Others give similar definitions, but they seem a tad circular to me: "mental process of manipulating ideas" and "using one's mind to consider" are about the same, and neither gets at the underlying mechanism. I like the idea of a train of thought better. Thinking is generating words in your mind, based on the words you thought last. The thing is, there are multiple ways of thinking. The definition I just gave applies when we are chatting with someone, or writing something like this paragraph. If someone asks us to do arithmetic, we may be using a different part of the brain, and the definition of thinking changes somewhat. But generating words (or concepts) based on previous words is what LLMs do, so are they thinking? Yes, according to our definition, they are. But that doesn't mean they are conscious.
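"Generating words based on the words you thought last" can be made literal with even a crude model. Here is a minimal sketch: a toy bigram chain (my own invented word table) that picks each word based only on the previous one. An LLM does something far richer, conditioning on the whole context through learned weights, but the generative loop has the same shape:

```python
import random

# Tiny bigram "model": each word suggests possible next words.
bigrams = {
    "the": ["cat", "dog"],
    "cat": ["sat", "slept"],
    "dog": ["ran", "sat"],
    "sat": ["quietly", "down"],
}

def generate(start, length=4):
    words = [start]
    for _ in range(length):
        options = bigrams.get(words[-1])
        if not options:
            break  # no known continuation; the train of thought ends
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly"
```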
Are we there yet?
If defining thinking is hard, defining consciousness is even harder. That said, LLMs are not conscious in the way we are, yet. They would need to have a continuous, though not infinite, context. They would need to selectively forget things, and be able to update their weights as they generate new information. But maybe we are more machine-like than we thought.
The Art of Debugging
Debugging is a skill you hone over the years. With time you will start to recognize problems and their solutions. But there are some principles that you can keep in mind, especially when you get stuck on a difficult bug.
You may assume the input is always in a certain range or of a certain type -- verify that it is (see the sketch after the list below).
You may assume a library or class does what it says on the tin -- verify that it does.
For really tricky bugs it's helpful to write down your assumptions, the inputs, and other system state, and to make a plan of attack:
- Verify your assumptions
- Verify your inputs
- Break down the problem
- Keep track of your progress
- Explain the problem to someone
- Don't give up
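To make the first two points concrete, here is a minimal sketch of turning silent assumptions into explicit checks. The function and its contract are hypothetical:

```python
def average_latency_ms(samples: list[float]) -> float:
    # Hypothetical helper. The assumptions we might silently make:
    # the list is non-empty, and every sample is a non-negative number.
    # Verify them instead of assuming.
    if not samples:
        raise ValueError("samples is empty; average is undefined")
    for s in samples:
        if not isinstance(s, (int, float)) or s < 0:
            raise ValueError(f"invalid sample: {s!r}")
    return sum(samples) / len(samples)
```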
Test Automation
When done right, test automation can save you a lot of time. Test automation is a big topic, and here I'll just cover some observations and tips that I found useful.
- Write the tests early in the development process. That way you benefit from the investment for longer.
- Run the tests on a schedule.
- Review the test results frequently. Ideally, set up alerts for when tests fail.
- Make the test results part of your metrics.
What tools to use?
It depends on what the system under test is. For a web application, I highly recommend Microsoft Playwright. It supports difficult use cases such as iframes and shadow DOM. At Optum we built a system where the user could launch tests from ALM (formerly HP ALM, now managed by OpenText) by clicking a toolbar button. We hooked a script up to the button that invoked a GitHub Actions workflow to run the Playwright tests. Playwright lets you take screenshots and record videos of the test runs. If a test failed, we would create a defect in ALM and attach the screenshots and videos to it. We also emailed them to the user. These tests were for the Epic EMR (which now has a web version).
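As a minimal sketch of what such a test can look like (using Playwright's Python API; the URL and assertion are placeholders, and our actual tests were more involved):

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    # Record a video of the session; useful when attaching evidence to a defect.
    context = browser.new_context(record_video_dir="videos/")
    page = context.new_page()
    page.goto("https://example.com")  # placeholder for the application under test
    assert "Example" in page.title()
    page.screenshot(path="result.png")  # screenshot to attach to the defect
    context.close()  # finalizes the video file
    browser.close()
```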
This Website
This website is built with Next.js and Tailwind CSS. It is hosted on GitHub Pages. The pages are rendered at build time and statically exported by a GitHub Actions workflow. The code is available on GitHub at hnordberg/hnordberg.github.io. To publish, I just push to the master branch. If this were a collaboration, I would use pull requests instead.