I recently installed a bulletin board beside my writing desk at home. At the moment, there is only one inspirational image pinned to it:

This post is about the second bit of advice: Know the problem.
I recently read a lovely essay all about understanding the problem as an alternative to trying to research the answer. That essay is called “When facing a complicated problem, don’t try to solve it, try to understand it.”
Here’s an excerpt.
Faced with endless options, as you are when planning a garden, or writing an essay, or shaping a life, how do you reliably figure out what the right solution is?
I was somewhat happy conceding it was a mystery. Sometimes things turn out well and sometimes not. But Johanna has a lot of what Laura Deming calls the rage of research (“But why????? my brain will scream.”). She absolutely cannot tolerate it when she doesn’t understand something.
So when Rebecka was a baby and only wanted to sleep if her mother was next to her in bed, Johanna filled the bed with books, work notes, and lectures of designers she admired. Studying their processes, she learned many useful things. But it was all a blur of small decisions. She couldn’t discern the underlying pattern of why they chose one thing over another, or how they reached their conclusions.
Until one day, she found a skeleton key. She figured out what they were doing. This insight—which took form during a close reading of the work of the design theoretician Christopher Alexander—was so basic and general that it also helped us figure out how to write better essays and deal with all of the complex, messy problems that we keep bringing on ourselves by acting on our curiosities and convictions.
Compressed to a single paragraph, the core idea was this:
When faced with a difficult problem, don’t try to solve it. Instead, make sure you understand it. If you understand it properly, the solution will be obvious.
It is the first day of September. Tomorrow my kids and I are going back to school.
The university is an institution made up of many disciplines, each employing its own approaches to understanding problems. We go to school to learn these approaches; we don’t go to learn answers. One does not go to art school to find answers, but to learn approaches to constraints. One takes courses in literature to learn and apply theory. The social sciences and the sciences have their own research methods.
I don’t know why this particular understanding is not more widely shared as the raison d’être for going to (or supporting) higher education: you go to school to better understand problems. Instead, you are more likely to hear someone say that university is a place where you learn how to learn. I think this is a poor choice of words because it suggests that those who don’t attend higher education don’t have their own ways of knowing.
At university, you learn to learn with theories or experiments, or through forms of practice that are specialized and take time to master; as such, they are likely new to you.
There are tests, papers, and assignments because we learn by doing.
[This is the part where we start talking about ChatGPT]
Large language models are marketed as artificial intelligence, but these systems are really more like search engines of expressed knowledge (scraped from books and the internet) and tacit knowledge (scraped from Reddit).
While these systems allow for customization, they do not learn.

This is what learning (and an environment designed for learning) looks like:
From Ted Chiang’s Why A.I. Isn’t Going to Make Art:
In 2019, researchers conducted an experiment in which they taught rats how to drive. They put the rats in little plastic containers with three copper-wire bars; when the rats put their paws on one of these bars, the container would either go forward, or turn left or turn right. The rats could see a plate of food on the other side of the room and tried to get their vehicles to go toward it. The researchers trained the rats for five minutes at a time, and after twenty-four practice sessions, the rats had become proficient at driving. Twenty-four trials were enough to master a task that no rat had likely ever encountered before in the evolutionary history of the species. I think that’s a good demonstration of intelligence.
Now consider the current A.I. programs that are widely acclaimed for their performance. AlphaZero, a program developed by Google’s DeepMind, plays chess better than any human player, but during its training it played forty-four million games, far more than any human can play in a lifetime. For it to master a new game, it will have to undergo a similarly enormous amount of training. By Chollet’s definition, programs like AlphaZero are highly skilled, but they aren’t particularly intelligent, because they aren’t efficient at gaining new skills. It is currently impossible to write a computer program capable of learning even a simple task in only twenty-four trials, if the programmer is not given information about the task beforehand.

I’m currently reading The Legal Analyst: A Toolkit for Thinking About the Law following a recommendation from David Colarusso. This is a book of ways of thinking and possible frameworks that one could use to approach or understand a legal problem. There are no answers in the book and so far there has been no mention of doing legal research to find them.
I’m going to end this post with this reminder (from Maggie Appleton):
The foundation labs who control the models and default interfaces aren’t prioritising this. At least as far as I can tell. They’re focused on autonomous, agentic workflows like the recent “Deep Research” hype. Or developing “reasoning models” where the models themselves are trained to be the critical thinkers. This focuses entirely on getting models to think for you, rather than helping you become a better thinker.
What’s the alternative?
Do one thing at a time.
Know the problem.
Learn to listen.
Learn to ask questions.
Distinguish sense from nonsense.
Accept change as inevitable.
Admit mistakes.
Say it simple.
Be calm.
Smile.