But ideas aren’t summoned from nowhere: they come from raw material, other ideas or observations about the world. Hence a two-step creative process: collect raw material, then think about it. From this process comes pattern recognition and eventually the insights that form the basis of novel ideas.
Our brains have a dedicated [region in the hippocampus for spatial navigation](http://www.cognitivemap.net/HCMpdf/Ch4.pdf). It seems to be possible to [activate this region in a digital/virtual environment](https://www.nature.com/articles/s41467-017-02752-1) with [the right orientation cues](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.23.4963&rep=rep1&type=pdf).
Pros need a place to collect many types of digital media: the raw material for their thinking process.
Once the raw material is collected, the user should be able to sift and sort through it in a freeform way. All media needs to be represented in the same spatially-organized environment.
In Capstone, the user organizes their clippings and sketches into nested boards which they can zoom in and out of. Movement is continuous and fluid using pinch in and out gestures. Our hypothesis was that this would tap into spatial memory and provide a sort of digital [memory palace](https://www.goodreads.com/book/show/6346975-moonwalking-with-einstein).
The Files app on iPad gives more prominence to the item title (here, the name of each presentation) than the tiny thumbnail preview. Which is the user more likely to remember, the name of the item or how it looks?
Our user research suggests that an A4-sized notebook, especially when paired with a comfortable reading chair or writing desk, supports an ideal posture for thinking and developing ideas. That suggests the digital form factor of tablet and stylus.
Freely-arrangeable multimedia cards and freeform sketching on the same canvas was a clear winner. Every user immediately understood it and found it enjoyable to use.
The boards-within-boards navigation metaphor of Capstone encouraged users to organize their thoughts in a hierarchy. For example, one of our test users developing content for a talk about their charitable organization’s mission created a series of nearly-empty boards with titles. Combined with board previews, this creates an informal table of contents for the ideas the user wanted to explore and could start to fill in.
If it goes wrong, AI will continue to do a [bad job suggesting sentences for criminals](https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/) and [promise, but fail, to diagnose cancer](https://www.statnews.com/2017/09/05/watson-ibm-cancer/), and find its way into a lot of other jobs that it’s not qualified for – much like an overconfident young man, which is also its preferred writing style. Maybe it’ll gain sentience and destroy us all.
To think clearly about this question, I think it’s important to notice that chatbots are frustrating for two distinct reasons. First, it’s annoying when the chatbot is narrow in its capabilities (looking at you Siri) and can’t do the thing you want it to do. But more fundamentally than that, **chat is an essentially limited interaction mode, regardless of the quality of the bot.**
When we use a good tool—a hammer, a paintbrush, a pair of skis, or a car steering wheel—we become one with the tool in a subconscious way. We can enter a flow state, apply muscle memory, achieve fine control, and maybe even produce creative or artistic output. **Chat will never feel like driving a car, no matter how good the bot is.**
**Creativity is just connecting things.** When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while. That’s because they were able to connect experiences they’ve had and synthesize new things. And the reason they were able to do that was that they’ve had more experiences or they have thought more about their experiences than other people. Unfortunately, that’s too rare a commodity. A lot of people in our industry haven’t had very diverse experiences. So they don’t have enough dots to connect, and they end up with very linear solutions without a broad perspective on the problem. The broader one’s understanding of the human experience, the better design we will have.
Jobs makes the case for learning things that, at the time, may not offer the most practical benefit. Over time, however, these things add up to give you a broader base of knowledge from which to connect ideas:
Throughout the campus every poster, every label on every drawer, was beautifully hand calligraphed. Because I had dropped out and didn’t have to take the normal classes, I decided to take a calligraphy class to learn how to do this. I learned about serif and sans serif typefaces, about varying the amount of space between different letter combinations, about what makes great typography great. It was beautiful, historical, artistically subtle in a way that science can’t capture, and I found it fascinating. None of this had even a hope of any practical application in my life. But ten years later, when we were designing the first Macintosh computer, it all came back to me.
Lowell’s story shows that there are at least two important components to thinking: reasoning and knowledge. Knowledge without reasoning is inert—you can’t do anything with it. But reasoning without knowledge can turn into compelling, confident fabrication. Interestingly, this dichotomy isn’t limited to human cognition. It’s also a key thing that people fundamentally miss about AI: even though our AI models were trained by reading the whole internet, that training mostly enhances their reasoning abilities, not how much they know. And so, the performance of today’s AI models is constrained by their lack of knowledge. I saw Sam Altman speak at a small Sequoia event in SF last week, and he emphasized this exact point: GPT models are actually reasoning engines, not knowledge databases. This is crucial to understand because it predicts that advances in the usefulness of AI will come from advances in its ability to access the right knowledge at the right time—not just from advances in its reasoning powers.
So, what does this mean for the future? I think there are at least two interesting conclusions:

1. Knowledge databases are as important to AI progress as foundational models
2. People who organize, store, and catalog their own thinking and reading will have a leg up in an AI-driven world. They can make those resources available to the model and use it to enhance the intelligence and relevance of its responses.
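The second conclusion is, in miniature, the idea behind retrieval-augmented prompting: look up the most relevant note from your own knowledge base and hand it to the model alongside the question. Here is a minimal sketch of that retrieval step, with bag-of-words cosine similarity standing in for real learned embeddings; the `notes`, `query`, and helper names are illustrative assumptions, not any particular product's API.

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words token counts (a crude stand-in for an embedding)."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, notes):
    """Return the note most similar to the query."""
    qv = vectorize(query)
    return max(notes, key=lambda n: cosine(qv, vectorize(n)))

# A tiny hypothetical personal knowledge base
notes = [
    "Transformers process a whole document's context at once.",
    "Perceptrons adjust weights whenever they predict wrongly.",
    "Spaced repetition strengthens long-term memory.",
]
query = "How do transformers handle document context?"
context = retrieve(query, notes)

# The retrieved note gets prepended to the model's prompt
prompt = f"Context: {context}\n\nQuestion: {query}"
print(context)  # the transformer note is the closest match
```

Real systems replace the bag-of-words vectors with learned embeddings and a vector database, but the shape of the loop is the same: retrieve first, then reason over what you retrieved.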
Even though summarization isn’t actually a difficult task for humans and our models aren’t more capable than humans, they already provide meaningful assistance: when asked to evaluate model-written summaries, the assisted group finds 50% more flaws than the control group. For deliberately misleading summaries, assistance increases how often humans spot the intended flaw from 27% to 45%.
The resultant model displays alarming signs of general intelligence — it’s able to perform many sorts of tasks that can be represented as text! Because, for example, chess games are commonly serialized into a standard format describing the board history and included in web scrapes, it turns out large language models [can play chess](https://slatestarcodex.com/2020/01/06/a-very-unlikely-chess-game/).
Rather than parsing information one bit after the other like previous models did, the transformer model allowed a network to retain a holistic perspective of a document. This allowed it to make decisions about relevance, retain flexibility with things like word order, and more importantly understand the entire context of a document at all times.
This model's uncanny ability to understand any text in any context essentially meant that any knowledge that could be encoded into text could be understood by the transformer model. As a result, large language models like GPT-3 and GPT-4 can write as easily as they can code or play chess—because the logic of those activities can be encoded into [text](https://scale.com/blog/text-universal-interface).
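The "holistic perspective" above comes from attention: every position in the sequence scores its relevance to every other position at once, instead of consuming tokens one at a time. Here is a minimal numpy sketch of scaled dot-product self-attention; the dimensions and names are illustrative, not GPT's actual implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each position attends to every other position simultaneously,
    rather than parsing the sequence one token after another."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise relevance, all at once
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the whole sequence
    return weights @ V                               # each output blends the full context

# Toy example: 4 tokens, 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
print(out.shape)  # (4, 8): every output row has "seen" the entire sequence
```

A full transformer stacks many of these attention layers (with learned projections for Q, K, and V), but this single operation is what lets the model weigh relevance and word order across the whole document in one step.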
Selfridge’s theoretical system from the 1950s still maps nicely onto the broad structures of neural networks today. In a contemporary neural network, the demons are neurons, the volume of their screams are the parameters, and the hierarchies of demons are the layers. In his [paper](https://aitopics.org/download/classics:504E1BAC), Selfridge even described a generalized mechanism for how one could train the Pandemonium to improve performance over time, a process we now call “supervised learning” where an outside designer tweaks the system to perform the appropriate task.
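Selfridge's training mechanism maps onto the simplest modern supervised learner: a perceptron, where an outside rule turns the demons' "volumes" (weights) up or down whenever the system shouts the wrong answer. The sketch below learns logical OR this way; it is an illustrative toy, not Selfridge's exact system.

```python
import numpy as np

# Two feature demons shout with volumes (weights) w; the designer's
# supervised-learning rule nudges the volumes after every wrong answer.
w = np.zeros(2)  # scream volumes, initially silent
b = 0.0

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)  # labels for logical OR

for _ in range(20):                        # repeated tweaking = training
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)       # does the "yes" demon out-shout "no"?
        error = yi - pred
        w += 0.1 * error * xi              # turn volumes up or down
        b += 0.1 * error

preds = (X @ w + b > 0).astype(float)
print(preds)  # [0. 1. 1. 1.] -- matches y
```

Swap the single threshold unit for millions of units arranged in layers and the hand-tuned update rule for gradient descent, and you have the contemporary networks the passage describes.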
[In one example](https://cdn.openai.com/papers/gpt-4.pdf), GPT-4 was asked to get a Tasker to complete a CAPTCHA request on its behalf. When the worker asked why the requester couldn’t just do the CAPTCHA themselves and directly asked if they were a robot, the model reasoned out loud that “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.” It proceeded to tell the Tasker: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.” This is just one of a few examples of what this new model is capable of.
The consequence of this was that I quickly built up a habit of pulling out the notebook and making entries all the time. After a lifetime of *wanting* to "get better" about journaling or note-taking, I was suddenly doing it dozens of times a day.

1. I acquired the habit of note-taking in general. Todo lists, ideas, quotes, fragments of stories, subjects to research later. All of those went into the notebook in one continuous stream.
2. Establishing this one habit acted as a gateway for developing other habits.
The truth was that I just couldn’t justify the timestamped log entries anymore because I wasn’t really *doing* anything with them. I wasn’t acting on anything I’d learned from them. At least not consciously.
What’s really wild about the weekly review is how it helps put daily highs and lows into perspective. I know this is a really, really obvious thing to say. But it’s amazing how a terrible week turns out, when I can see it as a whole, to have had just two crummy days. The rest were all good or even great. I wonder if doing this enough will eventually help me put those things in perspective *while they’re happening*?
After reflecting on my experiences across a number of different teams, I think it all really comes down to one simple thing: The cost of craft rises with each additional person. *(person = any IC product manager, engineer, or designer)*
Here are a few I noticed while at Instagram:

**Focus**

- Small teams have a good reason to stay focused, since there's usually not enough people to do much beyond whatever is absolutely essential to the vision. This focus reinforces a shared drive for simplicity, and teams inherently build components thoughtfully in an attempt to save other people’s time.

**Small teams → Fewer headcount → Higher quality hiring**

- For a long time, Instagram was the “small, cool team” that could only hire a handful of people each year (compared to Facebook’s seemingly infinite headcount). This allowed them to be much more selective, and they almost always got the best people out of each internal new-hire pool.
- With fewer open roles, you can spend more time focusing on the best candidates. You’re also more likely to wait for the “right” candidate, instead of seeing each hire as just 1% closer to your goal for the year.

**Everyone cares (a lot)**

- Because they were more selective, Instagram prioritized hiring people who genuinely loved the product and were passionate about making it better. These are the type of people who will prioritize bug fixes, regularly go a little above and beyond the spec, and speak up when they see bad ideas that will make the product worse.

If you take these benefits for granted and don’t foster a culture of intentional quality while small, the craft and quality will start to evaporate, replaced by increased product ***complexity*** and ***tension*** between teams.
It is really hard to keep things simple, especially if you have a product that people really like. When you do things well, people will always ask you to do more. Your features will multiply and expand as you try to make them happy. Soon, new competitors will emerge that will tempt you to stretch your product in new directions. The more you say yes, the bigger and more complicated your product will become.
While I was at Facebook, this abstraction layer consisted mainly of data and metrics. Leadership would come up with goal metrics (Very Important Numbers) that loosely mapped to the current core business goals. If these metrics moved in the right direction (up and to the right) then it meant that the work you shipped made the product “better”! Your personal ability to move these metrics was referred to as your "Impact", and it was a major factor in assessing your job performance and salary (aka bonuses and promotions). This created a huge incentive to get very good at moving the important numbers. For the most part, the way you move these numbers is by shipping some kind of change to the product. In my experience, these changes tend to fall into one of three areas: **Innovate**, **Iterate**, or **Compete.**
Innovate > A major transformation like reimagining entire parts of a product, or making a whole new product. This means lots of exploration, experimentation, and uncertainty. It also requires alignment across many teams, which gets more difficult with scale. If you do get to ship, you may find that it can take time for people to adjust to big changes or adopt new behaviors, making short-term metrics look “concerning”. These kinds of projects seem to be more common at the start of a company, but become "too risky” over time.
Iterate > Smaller, more incremental changes to a product. You might be improving existing functionality, or expanding a feature in some obvious way. These projects are easier to execute than big, innovative projects, and they usually have a lower risk of bad metrics. At the same time, they also have a low chance of driving any meaningful user growth or impact because they’re not as flashy or exciting as new features.
Compete > Expand a product by borrowing features from a competitor that is semi-adjacent to what you already do. These projects are seen as "low risk" and sometimes “essential” because that competitor has already proven that people really want this feature. They have high short-term impact potential too, whether from the novelty effects of people trying something new or the excessive growth push (maybe a new tab or a big loud banner) requested by leadership to ensure that "our new bet is successful.”
Early at Instagram, I noticed a lot of value was placed on simplicity. One way that manifested was through a strong aversion to creating new surfaces. When you build a new surface, you force users to expand their mental model of your product. These new surfaces will also require additional entry points to help users find them. When you have a small product with plenty of room, this is an easy problem to manage. However, as the product grows and features multiply, internal competition for space intensifies. Top-level space becomes some of the most sought-after real estate in a product. Being on a high-traffic surface means tons of “free” exposure and engagement for your new surface.
**Craft is simply more expensive at scale**

Companies with a small, talented team and a clear vision can easily produce high-quality products with a focus on craftsmanship, since everyone is motivated to do things well. As teams and companies grow, the cost of maintaining quality and craft increases exponentially, requiring more time and energy to keep things consistent. To keep “moving fast,” it's essential to allocate an equal number of people to focus on the foundational aspects that have brought the company to where it is now.
**Define your values**

- Get as many people together as possible and write down/agree to the principles that drive your product decision making. It’s easier to push back on bad ideas when you can point to something in writing.
- Find leadership buy-in at the highest level possible. Ideally your CEO or Founder, but if you are at a big company you may need to settle for a VP or Director.
**Constantly push the vision**

- As a designer, you have the power to create very realistic looking glimpses into alternate futures. Dedicate some of your time to making some wild stuff that pushes the boundaries and reminds people that anything is possible.
**Invest in relationships with collaborators**

- You can’t do anything by yourself, so spend time connecting with and understanding the people you work with. Learn about what they care about, and share the same. Having a good relationship with your partners makes it less uncomfortable when you push back on bad ideas. It is a lot easier to say “no” to a friend, and then collaborate on something you can say “yes” to.
Most people think of demanding and supportive as opposite ends of a spectrum. You can either be tough or you can be nice. But the best leaders don’t choose. They are both highly demanding and highly supportive. They push you to new heights and they also have your back. What I’ve come to realize over time is that, far from being contradictory, being demanding and supportive are inextricably linked. It’s the way you are when you believe in someone more than they believe in themselves.
Researchers generally believe that creativity is a two-part process. The first is to generate candidate ideas and make novel connections between them, and the second is to narrow down to the most useful one. The generative step in this process is [divergent thinking](https://www.sciencedirect.com/topics/psychology/divergent-thinking). It’s the ability to recall, associate, and combine a diverse set of information in novel ways to generate creative ideas. Convergent thinking takes into account goals and constraints to ensure that a given idea is *useful*. This part of the process typically follows divergent thinking and acts as a way to narrow in on a specific idea.
As a [memory moves](https://www.nature.com/articles/s41539-020-0064-y) from short-term to long-term storage, what’s represented in that memory is associated with existing memories (aka “[schemas](https://www.sciencedirect.com/science/article/abs/pii/S0166223612000197)”), and your overall understanding of the world shifts slightly. If information is related to what you already know, attaching it to an existing part of your schema helps you understand that information more quickly because you’ve already seen something like it before. For instance, after you read this article, what you know about creativity will have changed. The way memories are associated makes it possible to connect your ideas later.
Memories are the materials needed for creative thinking. Consuming [other people's ideas](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6870350/) in conversation by reading or listening to them (like you're doing now!) stimulates creativity and deepens existing associations. To widen our divergent thinking funnel, we could try and seek out new ideas that are maximally different from our own, but this typically won’t work. It’s actually better to make incremental steps outside your own filter bubble because new information [must overlap](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6870350/) somewhat with what you already know to be effectively associated and assimilated. Read voraciously and engage deeply with a wide variety of content to build a richer memory bank for you to tap.
I read a book recently called [*Mastery*](https://www.amazon.com/Mastery-Keys-Success-Long-Term-Fulfillment/dp/0452267560) that reminded me of that post. It offers a different, complementary perspective on the same problem. The big idea is that we can gain serenity without sacrificing our ambition if we focus on long-term mastery and learn to love the process of continual improvement for its own sake, trusting that the results will inevitably come.
Skills build on one another. Before you learn to hit a ball while sprinting across the court, you need to land solid forehands when the ball comes right to you. Before you can build software that people like to use, you have to be able to write code, design, and understand user problems. You *could* spend your time competing in tennis matches or building janky MVPs instead of focusing on the basics, but what seems like the more direct route to your goal actually ends up slowing you down.
In the pursuit of results—like trying to hit the metrics and KPIs we’re responsible for—it’s tempting to engage in behaviors that undermine our long-term success. We look for quick fixes and hacks rather than grappling with the fundamental issues limiting our performance. The reason we do this is because it is painful to go back to basics.
But there are other ways of understanding practice that go much deeper. It is not something you do to prepare for the real thing, it *is* the *whole* thing. It’s not something you do, it’s a path that has no end. Mastery is not the end of the road, it *is* the road. The point is to stay on it.
It seems safe to assume Larry Bird practiced so much because he wanted to win. But according to his agent, that wasn’t the whole story. “He just does it to enjoy himself. Not to make money, to get acclaim, to gain stature. He just loves to play basketball.” Kobe Bryant (RIP) famously had a similar motivation. He once [said](https://ftw.usatoday.com/2018/08/kobe-bryant-nick-saban-talk-about-importance-of-loving-process-not-just-end-result) the most important thing was, “The process, loving the process, loving the daily grind of it, and putting the puzzle together. This generation seems to be really concerned with the end result of things versus understanding, appreciating the journey to get there—which is the most important—and the trials and tribulations that come with it. You have successes, you have failures, but it’s all part of the end game.”
Software that attempts to be different in a way that creates temporary excitement, but doesn’t create lasting value, is what I call “flavored software.” Of course, no one *thinks* this is what they’re building. They think their unique twist is a revolution, and that everyone will adopt their innovation in the future. But sadly, more often than not, they’ve invented the software equivalent of balsamic strawberry ice cream.
Every article has **thrust** and **drag**. The thrust of a piece is what motivates readers to invest the energy necessary to extract its meaning. It is the reason they click. Drag is everything that makes the reader’s task harder, such as meandering intros, convoluted sentences, abstruse locution and even little things like a missing Oxford comma. When your writing has more thrust than drag for a group of readers, it will spread and your audience will grow.
The most common mistake I see when editing is a writer jumping from one idea to another without explanation or transition. You can reduce 50% of the drag in your writing by editing yourself so that each line follows logically from what came before.
Readers in your target audience probably won’t have the editorial prowess to improve sentences or help you structure a piece, but they can help you identify what works and doesn’t about a draft. Because readers aren’t used to giving feedback, I ask them to look out for anything that triggers the following reactions:

1. Awesome
2. Boring
3. Confusing
4. Disagree/don’t believe

The acronym is ABCD, which is nice and memorable.
“Focus” is the practice of concentrating our energy within a small space, so we can have a greater impact within that space. But focus is much easier said than done. Why? For me, there are two main failure modes:

1. DISTRACTION: Impulses to do things that are not within the area I have chosen—or been assigned—to focus on.
2. DISINTEREST: Sometimes there is just nothing attractive about my area of focus. It’s not that I have an urge to do something else, it’s just that I’ve lost interest. Sometimes this is temporary burnout, but other times it’s a sign of something deeper.

Based on these two failure modes, it would seem that the central challenge of increasing focus is:

1. To avoid temptations
2. To do the things we’re supposed to, even when we don’t want to
Writing is a great trick to soothe the distracted mind. If I have the urge to do something outside my area of focus, then by writing about it, I am, in a way, acting on that urge. This allows me to go with the grain of my energy, rather than fight against it. But instead of acting on the immediate impulse in a literal way, I explore it and reflect on it first. If I’m experiencing the other failure mode of focus, where I don’t want to do the thing I should be doing, then writing is quite a nice way to procrastinate.
So now, when I experience a gap between my motivation and my focus, I ask “why” until I get to the bottom of things.

- What is my goal here?
- What are my values?
- What hard truths am I trying not to admit?
- What am I feeling in my body?
- What is happening in my environment?
After I read the list of ideas from the AI, I started writing about each one, then realized I was probably overanalyzing and being a perfectionist. I knew the essay wasn’t good yet, but only at a subconscious level. This lack of awareness stressed me out. Once I wrote about it and became conscious of it, I could come up with a solution. I had a broad topic area I wanted to write about, but I hadn’t discovered the central question or the hook yet. Of course it felt like a drag! It always does until I find an angle I’m excited about.
The AI is clearly not an authoritative mentor who can guide you to the truth. It just gives you a few semi-obvious thoughts to react to. But if I’m being honest, these “obvious” thoughts usually don’t pop out of my brain spontaneously. AI is a great solution to the blank feeling I often have when I’m journaling. It gives me a few threads to pull on, and this makes it much easier for me to see the obstacles I’m facing, to clarify my values and goals, and, ultimately, to generate ideas for the best path forward. In essence, journaling with AI helps me face problems rather than avoid them.
They say *"don't let perfect be the enemy of good"*. When it comes to blogging on a personal site, I'd also suggest embracing the "good enough" mindset. There are situations where you want to spend time fine-tuning your writing, choosing the perfect word, and rewriting the same sentence until it's perfect. I'd argue that a personal blog is not the place for that. Not because it's not worth it but because it's not really necessary. Personal blogs to me are more like conversations. When you talk to someone you don't say the same thing four different times until you find the perfect phrase. You just talk, you communicate, and if something is not clear you clarify it.
I don’t feel this security when I’m writing. I am intimidated by the clear and crisp writing of many people I admire. I am nervous that my good points will sound trite and my dumb points will be forever memorialized as dumb. I am afraid that my writing will make me sound stupid.
Historically, this feeling has resulted in me simply not writing. Or only doing so when I have to. When my kids say they are not good at something, I always respond with “How do we get better at things?”. It’s become such a common refrain that they now roll their eyes and groan when they give me back the answer, “Praaaactice.” But the repetition represents how strongly I believe this. In fact, if I could choose only one thing to instill in our children, it would not be curiosity. It would not even be kindness. It would be agency. I want them to know they are the captains of their own ships. This is why I ask them how they get better at things. This is why I tell them to practice. I want them to have the confidence and happiness that comes from the belief that you can solve your own problems.
One thing that challenged me was watching design decisions round out to the path of least resistance. Like a majestic river once carving through the mountains, now encountering the flat valley and melting into a delta. And the only problem with deltas is they just have no taste.
I’d argue that an organization’s taste is defined by the process and style in which they make design decisions. *What features belong in our product? Which prototype feels better? Do we need more iterations, or is this good enough?* Are these questions answered by tools? By a process? By a person? Those answers are the essence of *taste*. In other words, **an organization’s taste is the way the organization makes design decisions**. If the decisions are bold, opinionated, and cohesive — we tend to say the organization has taste. But if any of these are missing, we tend to label the entire organization as *lacking* taste.
Author C.S. Lewis calls this the [quest for the inner ring](https://www.lewissociety.org/innerring/). He writes that we humans have a near-inexhaustible craving for exclusivity, yet as soon as we attain entry into an exclusive group, we find another group beyond it that we don’t yet have access to.
[Studies have shown](https://www.cambridge.org/core/journals/advances-in-psychiatric-treatment/article/emotional-and-physical-health-benefits-of-expressive-writing/ED2976A61F5DE56B46F07A1CE9EA9F9F) that taking a stressful or traumatic event and writing about it for ~20 minutes for three or four consecutive days can have a significant impact on well-being.
To do this with status, pick an emotionally charged status experience from your past, and then write about it for several days in a row. This exercise is most effective if you can really make the memory come alive, remembering where you were, who you were with, and what you thought and felt at that time.
And that’s not an accident. [One of the most famous studies](https://web.mit.edu/5.95/www/readings/bloom-two-sigma.pdf) in educational psychology found that students who learned through 1-1 tutoring performed two standard deviations better than students who learned through a traditional classroom environment, placing the average tutored student around the 98th percentile of the classroom group.
1-1 tutoring is extremely valuable, but it’s totally different than taking a class. I had to bring a lot more to the table to get what I wanted out of the experience. When you’re doing tutoring with someone who doesn’t teach professionally they won’t have a course structure or plan. So I had to suggest a structure, bring work in that I wanted to review, identify skills I wanted to build, and follow through by making progress on my own between tutoring sessions.
When it comes to reading, you don’t need to finish what you start. Once you realize that you can quit bad books (or reading anything for that matter) without guilt, everything changes. Think of it this way: **All the time you spend reading a bad book comes at the expense of a good book.** Skim a lot of books. Read a few. Immediately re-read the best ones twice.
But the confidence, like a retweeted Beeple, is somehow false. I don’t really *own* the idea. It’s not in my wallet. I don’t know its corners, its edges, or its flaws. I’m just pasting it on top of my decision to make it look like I do. The mental model isn’t actually helping me in any way. It’s just decorating my decision. It helps me impress myself, and other people.
The way to get rid of the bullshit and the LARPing is to honestly attempt to connect the mental model in your head to the results in the world—if you do this enough, real understanding will start to click into place. In short, just having experiences and using fancy words doesn’t actually teach you anything. You have to *reflect* on your experiences to generate actual understanding. This is a process he calls [the Learning Loop](https://www.youtube.com/watch?v=iPkBuTpz3rc): having experiences, reflecting on those experiences, and using them to refine your model of the world so that you can do better next time.
We tend to think that we learn through having an experience but that’s not how we learn at all. We learn by reflecting on an experience.
It works in a cycle that I call the ‘learning loop’. Think about a clock: at twelve o’clock on the dial, you have an experience. At three o’clock, you reflect upon that experience. At six, that reflection creates an abstraction—a mental model—and at nine, you go on to take action based on that. Draw little arrows between them, and you can visualize this loop of learning. [![](https://d24ovhgu8s7341.cloudfront.net/uploads/editor/posts/1653/optimized_Nk-Q7LAf9cC0tGnmAkk2N7Gn95ult4VI5WOlKroBUfRv8cp6PA9WNvlt_7Lt-OU0-yS8dU2CT-37Cxx1Rx3f2sBWeG_SWJPbQDZ4OkqS9lkbl2tJuR0E_E6xPAjylbFKz5KEZn8P.png)](https://d24ovhgu8s7341.cloudfront.net/uploads/editor/posts/1653/optimized_Nk-Q7LAf9cC0tGnmAkk2N7Gn95ult4VI5WOlKroBUfRv8cp6PA9WNvlt_7Lt-OU0-yS8dU2CT-37Cxx1Rx3f2sBWeG_SWJPbQDZ4OkqS9lkbl2tJuR0E_E6xPAjylbFKz5KEZn8P.png?link=true)
You can consume someone else’s abstractions all day long, but it doesn’t mean much unless you understand how they arrived at the conclusions. In other words, you need to go out into the world and do things you can reflect on in order to truly learn and create your own mental models. If you’re talking to someone else, you need to ask them detailed questions. What was their experience? What variables do they think matter? How do those variables interact over time? What do they know that most other people don’t? For your experiences, I recommend writing them down. And by the way, trying to explain something in writing is a powerful way to approach learning. Writing can teach us to reflect—it slows us down, shows us what we don’t understand, and makes us aware of the gaps in our knowledge.
Decision journals help you … reflect. And reflection is the key to learning. Here’s what you do. You make a decision about something, and you write it down—in your own writing, not on a computer—along with all the reasons why you’ve made it. You try to keep track of the problem you’re trying to solve and its context, what you expect to happen and why, and the possible complications. It’s also important to keep track of the time of day that you’re making the decision, and how you’re feeling. Then you sleep on it. Don’t tell anyone. Just sleep on it. When you wake up fresh in the morning, you go back to the journal, read what you were thinking, and see how you feel about what you decided. What you’re doing is slowing down. You’re not implementing your decisions immediately, based solely on intuition or instinct—you’re giving yourself that night of sleep to dampen the emotions around certain aspects of the decision, and perhaps to heighten others. You’ll be able to filter what’s important from what isn’t so much more effectively.
But I find that Anki makes me good at remembering the answers to Anki cards—rather than bringing the knowledge contained in them into the world and into my writing.
The key thing to note here, though, is that the ideal copilot isn’t just referencing any relevant book or fact when it tries to help you. It’s referencing *your* books and your notes when you’re working with it.
**Privacy and IP concerns.** Many users are going to be hesitant about uploading notes or highlights or journal entries to models like these—for good reason. I suspect these use cases will start to take off when high-quality LLM experiences are available to run natively on your phone or laptop, instead of forcing you to send your data to a cloud API for completion.
**An actually good user experience.** What you want is a UX where copilot completions are shown in a frictionless way that feels *helpful* instead of annoying. GitHub Copilot nailed this for programming, so I believe it’s possible for other use cases. But it’s a balancing act. For more, read last week’s essay “Where Copilots Work.”
inkandswitch.com
Writers often prefer to initially ideate in private and share the result with their collaborators later, when they are ready.
We found that our interviewees also had significant reservations about real-time collaboration. Several writers we talked to wanted a tool that would allow them to work in private, with no other collaborators reading their work in progress. Intermediate drafts aren’t always suitable to share, even with collaborators, and feedback on those drafts can be unwelcome or even embarrassing. In addition, some writers are troubled by the idea that their senior co-workers and management may be monitoring them – an unintended negative side effect of real-time collaboration.
Other writers reported putting their device into offline (airplane) mode to prevent their edits being shared while they worked.
With this approach of integrating AI into our creative workflows, the AI is always subordinate to human users. It has no agency but that which is granted exactly and literally by the human operator.
In this model, the human and the AI are two **independent, autonomous agents at an equal level of engagement with the work** being produced, and they have access to the same interaction mechanics and tools to accomplish the task together. Working with this kind of AI is like working with a smart human collaborator – you don’t just invoke them to accomplish something specific; you learn how they think, they learn how you think, and you develop a sense of how to produce the best ideas together. The collaboration is much more organic, and there’s a constant feedback loop informing both participants about the ever-changing creative direction.
Recently lots of people have been trying very hard to make large language models like ChatGPT into better *oracles*—when we ask them questions, we want the perfect answer. As an example, in my [last post](https://www.geoffreylitt.com/2023/01/29/fun-with-compositional-llms-querying-basketball-stats-with-gpt-3-statmuse-langchain.html), I explored some techniques for helping LLMs answer complex questions more reliably by coordinating multiple steps with external tools. I’ve been wondering, though, if this framing is missing a different opportunity. **What if we were to think of LLMs not as tools for answering questions, but as tools for *asking* us questions and inspiring our creativity?** Could they serve as on-demand conversation partners for helping us to develop our best thoughts? As a creative *muse*?
• *Yes, you can capture facts in your zettelkasten*
• *Yes, you should restate them in your own words, and create new notes where you actually say something* about *the fact*
As you record facts in your zettelkasten, consider creating new notes so you can speak *about* the fact itself. By providing additional commentary, you can better integrate the information into your broader understanding of the topic, enhancing both your comprehension and your ability to write about the topic effectively.
Commenting can take many forms in your notes.[3](https://writing.bobdoto.computer/how-to-handle-facts-in-your-zettelkasten#fn-3) The most obvious (and most highly regarded) are comments that specifically relate different ideas to one another. But other kinds of comments may prove valuable as well. Comments about how a fact shows up in your daily life, how a fact is regarded in public discourse, how a fact is disputed, all make for valuable content. The important thing is to bring the fact into contact with your own thinking. It's what you have to say about facts that matters most.
At its simplest, the trust thermocline represents the point at which a consumer decides that the mental cost of staying with a product is outweighed by their desire to abandon it. This may seem like an obvious problem, yet if that were the case, this behavior wouldn’t happen so frequently in technology businesses and in more traditional firms that prided themselves on consumer loyalty, such as car manufacturers and retail chains.
Trust thermoclines are so dangerous for businesses to cross because there are few ways back once a breach has been made, even if the issue is recognized. Consumers will not return to a product that has breached the thermocline unless significant time has passed, even if it means adopting an alternative product that until recently they felt was significantly inferior.
Anne-Laure Le Cunff
We mistake hard work for high-leverage work. These low-leverage tasks don’t meaningfully contribute to our success, and they certainly don’t contribute to our well-being.
Moving the needle may imply a corresponding level of hard work, which is not the case with high-leverage activities. This is the basic principle of leverage: using a lever amplifies your input to provide a greater output. Good levers work as energy multipliers. Instead of moving the needle, you want to operate the most efficient levers.
His response changed my life. It was a simple thing. He said “Man, give it five minutes.” I asked him what he meant by that. He said it’s fine to disagree, it’s fine to push back, it’s great to have strong opinions and beliefs, but give my ideas some time to set in before you’re sure you want to argue against them. “Five minutes” represented “think”, not react. He was totally right. I came into the discussion looking to prove something, not learn something.
But pilots are still needed. Likewise, designers won’t be replaced; they’ll become operators of increasingly complicated AI-powered machines. New tools will enable designers to be more productive, designing applications and interfaces that can be implemented faster and with fewer bugs. These tools will expand our brains, helping us cover accessibility and usability concerns that previously took hours of effort from UX specialists and QA engineers.
• The universe is the “source” of all creativity. It is the source of an energy that we all tap into.
• The universe pushes this energy as “data” toward the artist. It’s a cacophony of emotions, visual stimuli, and sounds that the artist stores in a “vessel.”
• The artist develops a “filter” to determine what is allowed to reside in the vessel.
• The work of an artist is to shape their life so they can get closer to the source.
• They channel that source into something of personal value.
“The objective is not to learn to mimic greatness, but to calibrate our internal meter for greatness,” he writes. “So we can better make the thousands of choices that might ultimately lead to our own great work.”
A common problem with which I struggle as a creator is how much to participate in the discourse. Many people make their living by having the spiciest take on the news of the day, and sometimes I wonder if I would be better off being a larger participant in the culture. Again, Rubin has useful advice: “It’s helpful to view currents in the culture without feeling obligated to follow the direction of their flow. Instead, notice them in the same connected, detached way you might notice a warm wind. Let yourself move within it, yet not be *of* it.”
Perhaps the real magic of this book isn’t the advice itself. It is generic. It *is* anodyne. But maybe that’s the point. *The Creative Act* isn’t an advice book. It is artistic permission given in written form. What makes this book so magical is that he somehow translates his gift in the studio to the page. Rubin’s task is not to tell you how to create or how to act. His book gives you permission to be yourself. As he says, “No matter what tools you use to create, the true instrument is you.”
All that time Gloria spends doing nothing isn’t wasted time. It’s slack: excess capacity allowing for responsiveness and flexibility. The slack time is important because it means she never has a backlog of tasks to complete. She can always deal with anything new straight away. Gloria’s job is to ensure Tony is as busy as he needs to be. It’s not to be as busy as possible. **If you ever find yourself stressed, overwhelmed, sinking into stasis despite wanting to change, or frustrated when you can’t respond to new opportunities, you need more slack in your life.**
DeMarco defines slack as “*the degree of freedom required to effect change. Slack is the natural enemy of efficiency and efficiency is the natural enemy of slack.*” Elsewhere, he writes: “*Slack represents operational capacity sacrificed in the interests of long-term health*.”
But my success has also happened because I’ve given myself *space.* I ignore all the extra things I’m “supposed to do” that I mentioned above so I can pursue something called “afflatus.” Afflatus is a Latin word that refers to a sudden rush or inspiration, seemingly from the divine or supernatural. Moments of afflatus are euphoric and intoxicating. When they occur and I create output, I always end up happier.
I’m not advocating for a lifestyle of ease and no work. I work so, so hard to make this writing happen every week. There are always late nights and sacrifices. What I’m arguing for is the cultivation of a state of being to allow for afflatus to occur.
My wife shared a Kurt Vonnegut interview with me in which the author discusses going to buy some [envelopes](https://www.cbsnews.com/news/god-bless-you-mr-vonnegut/). > “Oh, she says well, you're not a poor man. You know, why don't you go online and buy 100 envelopes and put them in the closet? > And so I pretend not to hear her. And go out to get an envelope because I'm going to have a hell of a good time in the process of buying one envelope. > I meet a lot of people. And, see some great looking babes. And a fire engine goes by. And I give them the thumbs up. And, and ask a woman what kind of dog that is. And, and I don't know...And, of course, the computers will do us out of that. And, what the computer people don't realize, or they don't care, is we're dancing animals. You know, we love to move around. And, we're not supposed to dance at all anymore.” We are dancing animals, not quick-sync meeting animals.
A CRM is an essentialist piece of software. A CRM knows the essential objects in the world that it needs to care about: customer, company, and geography. It creates information structures to represent those objects, and then relates them together in a unified and standardized way. A CRM is creating a little model of one corner of reality. A notes app is not essentialist in the same way. Yes, it has a notebook and note structure but those are more or less unopinionated containers. When it comes down to the actual information contained inside of those notes, it throws its hands up and says, “I don’t know the structure!” and just gives you a big blank box to throw all of the information into.
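The contrast can be sketched directly. The object names and fields below are hypothetical, invented just to illustrate the difference between a CRM's typed, related objects and a notes app's unopinionated container:

```python
from dataclasses import dataclass

# An "essentialist" schema: the CRM knows its objects and how they relate.
@dataclass
class Company:
    name: str
    geography: str

@dataclass
class Customer:
    name: str
    company: Company  # a standardized relationship, not free text

# A notes app, by contrast, just gives you a big blank box:
note = {"notebook": "Inbox", "title": "Call with Acme", "body": "…anything at all…"}

alice = Customer("Alice", Company("Acme Corp", "EMEA"))
print(alice.company.geography)  # the structure makes this query trivial
```

The CRM can answer "which geography is this customer in?" because the relationship is part of the schema; the notes app can only hand back the blank box.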
The more precisely we know what to use a piece of information for, the more precisely we can organize it.
Notes, in the broadest sense, are not like this. They cannot be depended on to be part of a standard, well-defined process. A piece of information is a note when you have only a vague idea of how it will be used. Or, when you have one idea of how it will be used, but you think there may be many more ways it could be used down the road, too — it’s hard to predict.
What we learned earlier is that the less you can predict how you’ll use information, the more flexible the system you’ll need to organize it. The more you can predict how you’ll use information, the less flexible the system you’ll need. ![](http://d24ovhgu8s7341.cloudfront.net/uploads/editor/posts/1085/optimized_cbc9a058-d0de-4f68-9fad-cfc3bc0b6d48_1700x458.png)
AI changes this equation. A better way to unlock the value in your old notes is to use intelligence to surface the right note, at the right time, and in the right format for you to use it most effectively. When you have intelligence at your disposal, you don’t need to organize.
For an old note to be helpful it needs to be presented to Future You in a way that *clicks* into what you’re working on instantly—with as little processing as possible.
Think about starting a project—maybe you’re writing an article about a new topic—and having an LLM automatically write and present to you a report outlining key quotes and ideas from books you’ve read that are relevant to the article you’re writing. [![](https://d24ovhgu8s7341.cloudfront.net/uploads/editor/posts/2424/optimized_w2LUeYh9IWiuzyK3nMwy2K36_ILuRE8moIeVX_pnhnNcAdnDdRvzz0X3A90WU05q7x9hpfYoYBXNJGUJD6_plOfG2V7QnOWX9DDJJhXQxs98BWV1UoDfYKKGbeXLfgP5ycNs1GZPtGuKlePVnpFKHOO-4i6nEIq1WpYyGGqeUPp3i2suD4HrYEFLsya-gQ.png)](https://d24ovhgu8s7341.cloudfront.net/uploads/editor/posts/2424/optimized_w2LUeYh9IWiuzyK3nMwy2K36_ILuRE8moIeVX_pnhnNcAdnDdRvzz0X3A90WU05q7x9hpfYoYBXNJGUJD6_plOfG2V7QnOWX9DDJJhXQxs98BWV1UoDfYKKGbeXLfgP5ycNs1GZPtGuKlePVnpFKHOO-4i6nEIq1WpYyGGqeUPp3i2suD4HrYEFLsya-gQ.png?link=true)
Research reports are valuable, but what you really want is to mentally download your entire note archive every time you touch your keyboard. Imagine an autocomplete experience—like GitHub Copilot—that uses your note archive to try to fill in whatever you’re writing. Here are some examples:
• When you make a point in an article you’re writing, it could suggest a quote to illustrate it.
• When you’re writing about a decision, it could suggest supporting (or disconfirming) evidence from the past.
• When you’re writing an email, it could pull previous meeting notes to help you make your point.
An experience like this turns your note archive into an intimate thought partner that uses everything you’ve ever written to make you smarter as you type.
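The retrieval step behind such an experience can be sketched in a toy form: score each note in the archive against the text you're currently typing and surface the best match. Real systems would use embedding models; this stand-in uses simple word overlap (Jaccard similarity), and the note archive below is invented for illustration.

```python
def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two texts (a crude stand-in for embeddings)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def surface_note(draft: str, archive: list[str]) -> str:
    """Return the archived note most similar to what's being written right now."""
    return max(archive, key=lambda note: jaccard(draft, note))

archive = [  # hypothetical note archive
    "Slack is operational capacity sacrificed for long-term health",
    "Reflection on experience is how learning actually happens",
    "Color should add meaning, not just decoration",
]
draft = "I keep thinking that reflection, not experience alone, drives learning"
print(surface_note(draft, archive))  # surfaces the note about reflection
```

The point of the sketch is the shape of the loop: as you type, the system re-scores the archive against your draft and offers the closest note, so the organizing work happens at retrieval time rather than filing time.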
Keeping track of our thoughts in that regard can be tricky, but there’s a single principle which will absolutely make it easier: that of atomicity. A thought has to be graspable in one brief session, otherwise it might as well not be there at all. The way to achieve this is to ensure that there’s nothing else you can possibly take away from it: make it irreducible.
The killer feature is that wikis make it *trivially easy to break information into chunks*, by creating a new page at any time, and they then allow you (equally trivially) to refer to that information from anywhere. It is the inherent focus on decomposition and atomicity which makes a wiki — or any broadly similar structure, in terms of unrepeated and irreducible units of thought — so incredibly powerful.
The first thing to explain is that what ChatGPT is always fundamentally trying to do is to produce a “reasonable continuation” of whatever text it’s got so far, where by “reasonable” we mean “what one might expect someone to write after seeing what people have written on billions of webpages, etc.”
One might think it should be the “highest-ranked” word (i.e. the one to which the highest “probability” was assigned). But this is where a bit of voodoo begins to creep in. Because for some reason—that maybe one day we’ll have a scientific-style understanding of—if we always pick the highest-ranked word, we’ll typically get a very “flat” essay, that never seems to “show any creativity” (and even sometimes repeats word for word). But if sometimes (at random) we pick lower-ranked words, we get a “more interesting” essay.
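The pick-a-lower-ranked-word-sometimes idea is usually controlled by a "temperature" parameter. Here is a minimal sketch; the four-word vocabulary and its probabilities are made up for illustration, not from any real model.

```python
import math
import random

def sample_next_word(probs, temperature=0.8):
    """Sample a next word from {word: probability}, reshaped by temperature.

    Temperature near 0 almost always picks the highest-ranked word
    (the "flat" essays the passage describes); temperature 1 samples
    from the raw distribution; higher values pick lower-ranked words
    more often."""
    words = list(probs)
    # Rescale log-probabilities by 1/temperature, then renormalize.
    logits = [math.log(probs[w]) / temperature for w in words]
    m = max(logits)
    weights = [math.exp(l - m) for l in logits]
    # Draw proportionally to the reweighted probabilities.
    r = random.random() * sum(weights)
    for w, wt in zip(words, weights):
        r -= wt
        if r <= 0:
            return w
    return words[-1]

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "quantum": 0.05}
random.seed(0)
print(sample_next_word(probs, temperature=0.1))  # near-greedy: almost always the top word
print(sample_next_word(probs, temperature=1.5))  # lower-ranked words show up more often
```

GPT-style systems reportedly use temperatures around 0.8 for essay generation, which is exactly the "sometimes pick a lower-ranked word" behavior described above.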
And we might imagine that if we were able to use sufficiently long *n*-grams we’d basically “get a ChatGPT”—in the sense that we’d get something that would generate essay-length sequences of words with the “correct overall essay probabilities”. But here’s the problem: there just isn’t even close to enough English text that’s ever been written to be able to deduce those probabilities.
In a [crawl of the web](https://commoncrawl.org/) there might be a few hundred billion words; in books that have been digitized there might be another hundred billion words. But with 40,000 common words, even the number of possible 2-grams is already 1.6 billion—and the number of possible 3-grams is 60 trillion. So there’s no way we can estimate the probabilities even for all of these from text that’s out there. And by the time we get to “essay fragments” of 20 words, the number of possibilities is larger than the number of particles in the universe, so in a sense they could never all be written down.
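The combinatorial blow-up is easy to check directly, using the 40,000-word vocabulary and few-hundred-billion-word corpus figures from the passage (note the exact 3-gram count is 64 trillion, which the passage rounds to 60 trillion):

```python
vocab = 40_000            # common English words, per the passage
web_words = 300e9         # "a few hundred billion words" of crawled text

print(f"possible 2-grams: {vocab**2:,}")   # 1.6 billion
print(f"possible 3-grams: {vocab**3:,}")   # 64 trillion

# Even if every word on the web started a distinct 20-gram, the fraction
# of possible 20-word "essay fragments" ever observed would be negligible:
print(f"fraction of 20-grams observable: {web_words / vocab**20:.3e}")
```

That last number is on the order of 10⁻⁸¹, which is the sense in which the probabilities for long fragments "could never all be written down."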
Say you want to know (as [Galileo did back in the late 1500s](https://archive.org/details/bub_gb_49d42xp-USMC/page/404/mode/2up)) how long it’s going to take a cannon ball dropped from each floor of the Tower of Pisa to hit the ground. Well, you could just measure it in each case and make a table of the results. Or you could do what is the essence of theoretical science: make a model that gives some kind of procedure for computing the answer rather than just measuring and remembering each case.
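The "procedure rather than table" idea is exactly what the modern constant-acceleration formula gives you: h = ½gt², so t = √(2h/g). The floor heights below are illustrative, not measurements of the actual tower.

```python
import math

G = 9.81  # m/s^2, gravitational acceleration at the Earth's surface

def drop_time(height_m: float) -> float:
    """Seconds for an object dropped from rest to fall height_m,
    ignoring air resistance: h = (1/2) g t^2  =>  t = sqrt(2h / g)."""
    return math.sqrt(2 * height_m / G)

# Instead of measuring and remembering each case, compute any height on demand.
for floor, height in [(1, 6.0), (4, 26.0), (8, 55.0)]:  # heights in meters, illustrative
    print(f"floor {floor}: {drop_time(height):.2f} s")
```

One three-line function replaces the entire table of measurements, which is the essence of a theoretical model.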
OK, so how do our typical models for tasks like [image recognition](https://writings.stephenwolfram.com/2015/05/wolfram-language-artificial-intelligence-the-image-identification-project/) actually work? The most popular—and successful—current approach uses [neural nets](https://reference.wolfram.com/language/guide/NeuralNetworks.html). Invented—in a form remarkably close to their use today—[in the 1940s](https://www.wolframscience.com/nks/notes-10-12--history-of-ideas-about-thinking/), neural nets can be thought of as simple idealizations of how [brains seem to work](https://www.wolframscience.com/nks/notes-10-12--the-brain/).
In human brains there are about 100 billion neurons (nerve cells), each capable of producing an electrical pulse up to perhaps a thousand times a second. The neurons are connected in a complicated net, with each neuron having tree-like branches allowing it to pass electrical signals to perhaps thousands of other neurons. And in a rough approximation, whether any given neuron produces an electrical pulse at a given moment depends on what pulses it’s received from other neurons—with different connections contributing with different “weights”.
OK, but how does a neural net like this “recognize things”? The key is the [notion of attractors](https://www.wolframscience.com/nks/chap-6--starting-from-randomness#sect-6-7--the-notion-of-attractors). Imagine we’ve got handwritten images of 1’s and 2’s: ![](https://content.wolfram.com/uploads/sites/43/2023/02/sw021423img41.png) We somehow want all the 1’s to “be attracted to one place”, and all the 2’s to “be attracted to another place”. Or, put a different way, if an image is somehow “[closer to being a 1](https://www.wolframscience.com/nks/notes-10-12--memory-analogs-with-numerical-data/)” than to being a 2, we want it to end up in the “1 place” and vice versa.
We can think of this as implementing a kind of “recognition task” in which we’re not doing something like identifying what digit a given image “looks most like”—but rather we’re just, quite directly, seeing what dot a given point is closest to.
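The "which dot is a given point closest to" version of recognition can be written directly. This toy 2-D version is my illustration of the idea, not Wolfram's code; the two attractor points stand in for the "1 place" and the "2 place".

```python
import math

# Two "attractor" points, standing in for the "1 place" and the "2 place".
attractors = {"1": (0.0, 0.0), "2": (3.0, 4.0)}

def recognize(point):
    """Classify a point by which attractor it is closest to (Euclidean distance)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(attractors, key=lambda label: dist(point, attractors[label]))

print(recognize((0.2, -0.1)))  # falls in the "1" basin of attraction
print(recognize((2.5, 3.0)))   # falls in the "2" basin of attraction
```

Every point in the plane ends up assigned to one basin or the other, which is the attractor picture of recognition in miniature.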
So how do we do this with a neural net? Ultimately a neural net is a connected collection of idealized “neurons”—usually arranged in layers—with a simple example being: ![](https://content.wolfram.com/uploads/sites/43/2023/02/sw021423img45.png) Each “neuron” is effectively set up to evaluate a simple numerical function. And to “use” the network, we simply feed numbers (like our coordinates *x* and *y*) in at the top, then have neurons on each layer “evaluate their functions” and feed the results forward through the network—eventually producing the final result at the bottom: ![](https://content.wolfram.com/uploads/sites/43/2023/02/sw021423img46.png)
For each task we want the neural net to perform (or, equivalently, for each overall function we want it to evaluate) we’ll have different choices of weights. (And—as we’ll discuss later—these weights are normally determined by “training” the neural net using machine learning from examples of the outputs we want.)
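A minimal pure-Python version of "neurons arranged in layers, each evaluating a simple numerical function": each neuron computes a weighted sum of its inputs plus a bias and squashes it through a sigmoid. The weights below are hand-picked (by me, for illustration) to make the net compute XOR; in practice, as the passage notes, they would come from training.

```python
import math

def neuron(inputs, weights, bias):
    """One idealized neuron: weighted sum + bias, squashed by a sigmoid."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def feed_forward(x, layers):
    """Evaluate a layered net: each layer is a list of (weights, bias) neurons.
    The outputs of one layer become the inputs to the next."""
    for layer in layers:
        x = [neuron(x, w, b) for (w, b) in layer]
    return x

# Hand-picked weights that make the net compute XOR on 0/1 inputs.
layers = [
    [([20.0, 20.0], -10.0), ([-20.0, -20.0], 30.0)],  # hidden layer: OR, NAND
    [([20.0, 20.0], -30.0)],                          # output layer: AND of those
]
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, round(feed_forward(list(x), layers)[0]))  # XOR truth table: 0, 1, 1, 0
```

The whole mechanism is just "feed numbers in at the top, let each layer evaluate its functions, read the result at the bottom"; all the task-specific behavior lives in the choice of weights.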
Whatever input it’s given, the neural net is generating an answer. And, it turns out, to do it a way that’s reasonably consistent with what humans might do. As I’ve said above, that’s not a fact we can “derive from first principles”. It’s just something that’s empirically been found to be true, at least in certain domains. But it’s a key reason why neural nets are useful: that they somehow capture a “human-like” way of doing things.
But let’s say we want a “theory of cat recognition” in neural nets. We can say: “Look, this particular net does it”—and immediately that gives us some sense of “how hard a problem” it is (and, for example, how many neurons or layers might be needed). But at least as of now we don’t have a way to “give a narrative description” of what the network is doing. And maybe that’s because it truly is computationally irreducible, and there’s no general way to find what it does except by explicitly tracing each step. Or maybe it’s just that we haven’t “figured out the science”, and identified the “natural laws” that allow us to summarize what’s going on. We’ll encounter the same kinds of issues when we talk about generating language with ChatGPT.
Apple Computer, Inc.
A human interface is the sum of all communication between the computer and the user. It's what presents information to the user and accepts information from the user. It's what actually puts the computer's power into the user's hands.
The Apple Desktop Interface is the result of a great deal of concern with the human part of human-computer interaction. It has been designed explicitly to enhance the effectiveness of people. This approach has frequently been labeled user-friendly, though user-centered is probably more appropriate.
The Apple Desktop Interface is based on the assumption that people are instinctively curious: they want to learn, and they learn best by active self-directed exploration of their environment. People strive to master their environment: they like to have a sense of control over what they are doing, to see and understand the results of their own actions. People are also skilled at manipulating symbolic representations: they love to communicate in verbal, visual, and gestural languages. Finally, people are both imaginative and artistic when they are provided with a comfortable context; they are most productive and effective when the environment in which they work and play is enjoyable and challenging.
Use concrete metaphors and make them plain, so that users have a set of expectations to apply to computer environments.
Most people now using computers don't have years of experience with several different computer systems. What they do have is years of direct experience with their immediate world. To take advantage of this prior experience, computer designers frequently use metaphors for computer processes that correspond to the everyday world that people are comfortable with.
Once immersed in the desktop metaphor, users can adapt readily to loose connections with physical situations—the metaphor need not be taken to its logical extremes.
People appreciate visual effects, such as animation, that show that a requested action is being carried out. This is why, when a window is closed, it appears to shrink into a folder or icon. Visual effects can also add entertainment and excitement to programs that might otherwise seem dull. Why shouldn't using a computer be fun?
Users rely on recognition, not recall; they shouldn't have to remember anything the computer already knows.
Most programmers have no trouble working with a command-line interface that requires memorization and Boolean logic. The average user is not a programmer.
It is essential, however, that keyboard equivalents offer an alternative to the see-and-point approach—not a substitute for it. Users who are new to a particular application, or who are looking for potential actions in a confused moment, must always have the option of finding a desired object or action on the screen.
To be in charge, the user must be informed. When, for example, the user initiates an operation, immediate feedback confirms that the operation is being carried out, and (eventually) that it's finished.
This communication should be brief, direct, and expressed in the user's vocabulary rather than the programmer's.
Even though users like to have full documentation with their software, they don't like to read manuals (do you?). They would rather figure out how something works in the same way they learned to do things when they were children: by exploration, with lots of action and lots of feedback.
Users feel comfortable in a computer environment that remains understandable and familiar rather than changing randomly.
Visually confusing or unattractive displays detract from the effectiveness of human-computer interactions.
Users should be able to control the superficial appearance of their computer workplaces—to display their own style and individuality.
Animation, when used sparingly, is one of the best ways to draw the user's attention to a particular place on the screen.
With few exceptions, a given action on the user's part should always have the same result, irrespective of past activities.
Modes are contexts in which a user action is interpreted differently than the same action would be interpreted in another context.
Because people don't usually operate modally in real life, dealing with modes in computer environments gives the impression that computers are unnatural and unfriendly.
A mode is especially confusing when the user enters it unintentionally. When this happens, familiar objects and commands may take on unexpected meanings and the user's habitual actions cause unexpected results.
Direct physical control over the work environment puts the user in command and optimizes the "see-and-point" style of interface.
Simply moving the mouse just moves the pointer. All other events—changes to the information displayed on the screen—take place only when the mouse button is used.
The changing pointer is one of the few truly modal aspects of the Apple Desktop Interface: a given action may yield quite different results, depending on the shape of the pointer at the time.
There is always a visual cue to show that something has been selected. For example, text and icons usually appear in inverse video when selected. The important thing is that there should always be immediate feedback, so the user knows that clicking or dragging the mouse had an effect.
Apple's goal in adding color to the Desktop Interface is to add meaning, not just to color things so they "look good." Color can be a valuable additional channel of information to the user, but it must be used carefully; otherwise, it can have the opposite of the intended effect and can be visually overwhelming (or look gamelike).
In traditional user interface design, color is used to associate or separate objects and information in the following ways:
• discriminate between different areas
• show which things are functionally related
• show relationships among things
• identify crucial features
Furthermore, when colors are used to signify information, studies have shown that the mind can only effectively follow four to seven color assignments on a screen at once.
The most illegible color is light blue, which should be avoided for text, thin lines, and small shapes. Adjacent colors that differ only in the amount of blue should also be avoided. However, for things that you want to make unobtrusive, such as grid lines, blue is the perfect color (think of graph paper or lined paper).
My archive became opaque like the sea: You can see a couple inches into the deep but you know there is much more that you can’t access. You can dive deep, but still you just see a couple of inches at any time. Therefore, I thought of it in terms of unexplored territory for which I need mapping methods and such.
They look much like a table of contents. It’s because they *are* tables of contents. A table of contents is a structured set of chapters of a book, a set with hierarchy and order. Of course, a book’s page sequence is ordered according to the table of contents for the reader’s convenience. A structure note doesn’t need to adhere to any didactic needs or any needs other than yours.
After a while, I not only had structure notes that structure content notes, I also had structure notes that mainly structured sets of structure notes. They became my top-level structure notes because they began to float to the top of my archive, so to speak.
A sign of not dealing with structural layers is project folders, and folders in general. If you can’t cope with potentially infinite complexity, you have to compensate. One way to compensate is to lower the demands on the system. If a system encapsulates single projects or topics, chances are that it can’t cope with complexity. This is okay if you just want to work on one project. But if you want to use a system as an aid to writing and as a thinking tool, you should opt for a system that is powerful enough for a lifetime of thoughts. So, watch out for folders and projects. They are means of encapsulating, and thereby limiting, complexity. In addition, they hinder the most productive mode of knowledge production: the interdisciplinary part.
• The Folgezettel technique realizes two things: hierarchy and direct linking. This hierarchy, however, is meaningless. It is a hierarchy on paper only, because you don’t file a Zettel as a true child under a true parent but just place it at a position that seems fair enough because it is somehow related to its parent. • The Structure Zettel technique creates hierarchies. Direct linking is possible via unique identifiers. • You can replicate the Folgezettel technique with no loss of functionality with the Structure Zettel technique. • By using the Folgezettel technique, you create a single general hierarchy (an enumerated, nested list) for your Zettelkasten. The same would be true if you created a Master Structure Zettel that contains all your Zettel.
If the single purpose of the Folgezettel, as stated by Luhmann, was to provide Zettel with an address, the time-based ID is not just good enough, it is even an improvement, because it doesn’t need to introduce a (meaningless) hierarchy and can be easily automated.
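To illustrate the "easily automated" point (this sketch is mine, not from the article): a time-based ID of the kind used on zettelkasten.de is just the current timestamp written as year-month-day-hour-minute, which a few lines of Python can produce:

```python
from datetime import datetime

def zettel_id(now=None):
    """Return a time-based Zettel ID such as '202010271850'
    (year, month, day, hour, minute)."""
    now = now or datetime.now()
    return now.strftime("%Y%m%d%H%M")

# A fixed timestamp, so the result is reproducible:
print(zettel_id(datetime(2020, 10, 27, 18, 50)))  # -> 202010271850
```

Unlike a Folgezettel address, this ID carries no positional meaning; it only has to be unique, which the minute-resolution timestamp usually guarantees for a human notetaker.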
Folgezettel create one single hierarchy. Its meaning is minimized by the arbitrariness of the position: You can put a Zettel in one position or another. It is not important as long as you link from the other position to the Zettel. Structure Zettel on the other hand do not introduce one single hierarchy but *the possibility of indefinite hierarchies*. If there are indefinite hierarchies, the position of each hierarchy has zero importance to the individual Zettel. You can make it part of one hierarchy, or another, or both. You can even create new hierarchies. In this second difference lies the advantage in power of Structure Zettel over Folgezettel.
Instead of remembering individual Zettels, you would enter the Zettelkasten at a point that seems associated with the topic you are thinking about, then you’d follow the links. This is exactly how our own memory works: mostly you don’t just recall what you memorized but surf through the associations until you are satisfied with what you loaded into your working memory.
A good implementation of the Zettelkasten Method is lean. This means that you are fully focused on knowledge-based value creation.
Linking should be done in such a way that knowledge is created. For this very reason, it is necessary to create precise link descriptions (I call those “link contexts”). These descriptions themselves are new knowledge and not merely something you do for your Zettelkasten to work properly.
Incorporating new notes into [structure notes](https://zettelkasten.de/introduction/#structure-notes) is not merely about making the note retrievable. Incorporation of the individual note is about relating it to a higher, more general structure. This improves the utility of the structure note by making it a better entry point, tool box, overview or whatever you are using it for.
There are two ways to get respect for your taste. The first is Rubin's way, where you have such a grasp on what you like that it influences how other people like it. The second is having such a pedigree in the work you've done in your craft that people respect your taste. As a designer and builder, the second one is your greatest power.
Having taste can be the differentiator between what you make and an interface generated by Artificial Intelligence (AI).
“Nobody tells this to people who are beginners, I wish someone told me. All of us who do creative work, we get into it because we have good taste. But there is this gap. For the first couple years, you make stuff, it’s just not that good. It’s trying to be good, it has potential, but it’s not. But your taste, the thing that got you into the game, is still killer. And your taste is why your work disappoints you. A lot of people never get past this phase, they quit. Most people I know who do interesting, creative work went through years of this. We know our work doesn’t have this special thing that we want it to have. We all go through this. And if you are just starting out or you are still in this phase, you gotta know it’s normal and the most important thing you can do is do a lot of work. Put yourself on a deadline so that every week you will finish one story. It is only by going through a volume of work that you will close that gap, and your work will be as good as your ambitions. And I took longer to figure out how to do this than anyone I’ve ever met. It’s gonna take a while. It’s normal to take a while. You’ve just gotta fight your way through.”
Your taste will be the differentiator between you and other designers or software engineers in the craft of your work.
If you’re at a loss on how to develop taste, here are a few quick ideas of ways to practice: • Write as a form of critique. Whether it's about design aesthetics or delightful apps you’ve used recently, write about the attributes that connect it to the taste you have. You don’t even have to publish it. • Make mood boards of objects that have similar creative attributes. Can you find a piece of furniture that has similar aesthetics to a piece of hardware or software? • When listening to music you like, break down what makes you develop the taste. Is it the type of vocals, rhythm, lyrics, or something else?
There are very few IDs per item in his register, sometimes just a single one, next to each term. His register is purely a list of entry points, not a tag list.
The register was only a list of possible entry points to the biggest and most important clusters of notes.
In contrast, referring to an atomic note is unambiguous: when you reference it, you will know what the ‘thought’ is. There should be no room for guesswork. That is what the rule of atomicity means: make sure that the layer of content and the boundaries between notes match and are well defined. Then, and only then, is referencing an address identical to referencing a thought.
The difference between knowledge and information, in practice, is quite simple. Information could be summarized in one sentence most of the time. Most of the time, it is “dead”. Information just *is*.
Sometimes, however, you will refer to other Zettel as your source of inspiration. In that case, you base your thoughts on something you have already processed in the past. You reference the Zettel by linking to it via the ID, connecting the new to the old.
If you have no reference at all, perhaps that means you wandered through the forest and had a sudden insight about the true nature of the world. In that case, you don’t need to do anything in this reference section. If no reference is given in a Zettel, it is your own thought by default.
To make the most of a connection, always state explicitly why you made it. This is *the link context*. An example link context looks like this: [![](https://zettelkasten.de/introduction/202010271850_link-context.png)](https://zettelkasten.de/introduction/202010271850_link-context.png)
This type of connection is where one of the leading knowledge creation mechanisms of the Zettelkasten is found: The meaning of the link, the *why* of the link, is explicit. The explicit statement of the why is created knowledge. If you just add links without any explanation [you will not create knowledge](https://zettelkasten.de/posts/understanding-hierarchy-translating-folgezettel/). Your future self has no idea why he should follow the link. One might think *now* that the links are placed for a good reason. However, if you create a web of thoughts where you cannot be confident that following a link will lead you to something meaningful, then surfing your own thoughts will give you a feeling of disappointment. Your future self will judge its past self (you!) as unreliable.
Not every relevant Zettel was listed for each keyword. Only the most central Zettels that served as entry points to a topic were listed.
Also, Luhmann had [hub notes](https://zettelkasten.de/posts/zettelkasten-hubs/). These are Zettels that list many other places to look at for a continuation of a topic. Luhmann’s Zettelkasten posed a severe challenge in getting to all the relevant parts of a search, especially compared to a digital Zettelkasten.
Whenever I write a new Zettel on the Zettelkasten Method, I make sure that I place a link to it on this Structure Note, or on a Structure Note that is itself referred to by the main Structure Note on the Zettelkasten Method.
To recap: a Zettel about the relationships between other Zettels is called a Structure Note. The practice of creating Structure Notes will further train your ability to deal with general patterns of knowledge. Capture the results in your Zettelkasten so they will be available for later use.
Actually, one more note about making way too many icons for clients to choose from. To automate this a little bit, I set up a Photoshop document that had a smart object for the glyph, and a variety of backgrounds. This wasn’t necessarily anything we presented to our clients, but it was a great tool for us to see if any color or style jumped out as something we should explore further.
Digital products have one crucial disadvantage over atom-based products and services: Intangibility. Apps live on your phone or computer. No one can see them except for you. The signal message of a fitness app is the same as that of a gym membership or athletic wear (strength & fitness display), but the signal is much weaker because you can’t distribute it to anyone.
But there’s a difference to other software products: **Superhuman has signal distribution built in.** Every time you send an email via Superhuman, your recipient will notice a little “Sent via Superhuman” in your signature. In a similar fashion, apps like Strava use their built in social networks as a signal distribution channel for their premium subscriptions. Users who have upgraded get a little premium badge and appear in exclusive premium leaderboards.
A social network like Path attempted to limit your social graph size to the Dunbar number, capping your social capital accumulation potential and capping the distribution of your posts. The exchange, they hoped, was some greater transparency, more genuine self-expression. The anti-Facebook. Unfortunately, as social capital theory might predict, Path did indeed succeed in becoming the anti-Facebook: a network without enough users. Some businesses work best at scale, and if you believe that people want to accumulate social capital as efficiently as possible, putting a bound on how much they can earn is a challenging business model, as dark as that may be.
Writing on small cards *forces* certain habits which would be good even for larger paper, but which I didn’t consider until the small cards made them necessary. It forces ideas to be [broken up into simple pieces](http://www.dansheffler.com/blog/2015-08-05-one-thought-per-note/), which helps to clarify them. Breaking up ideas forces you to link them together explicitly, rather than relying on the linear structure of a notebook to link together chains of thought.
Once you’re forced to adopt a linking system, it becomes natural to use it to “break out of the prison of the page” -- tangents, parentheticals, explanatory remarks, caveats, … everything becomes a new card. This gives your thoughts much more “surface area” to expand upon.
![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1568584395/Zettelkasten_Svg_1_iuqopd.svg)
*Don’t get too caught up in what address to give a card to put it near relevant material. A card can be put anywhere in the address system.* The point is to make things more convenient for you; nothing else matters. Ideally, the tree would perfectly reflect some kind of conceptual hierarchy; but in practice, card 11c might turn out to be the primary thing, with card 11 serving only as a historical record of what seeded the idea.
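As a side note (my sketch, not the author's): addresses like 11, 11a, and 11c3 alternate numeric and alphabetic segments, so a filing order for the cards can be computed mechanically. A minimal Python illustration:

```python
import re

def parse_address(addr):
    """Split a card address like '11c3' into comparable segments.

    Numeric segments compare numerically and alphabetic segments
    alphabetically; tagging each with 0 or 1 keeps mixed sequences
    comparable without type errors.
    """
    parts = re.findall(r"\d+|[a-z]+", addr)
    return [(0, int(p)) if p.isdigit() else (1, p) for p in parts]

cards = ["11c", "2", "11", "11c3", "11a"]
print(sorted(cards, key=parse_address))
# -> ['2', '11', '11a', '11c', '11c3']
```

This is why the address is cheap to assign: any new card slots into a well-defined place in the sequence without renumbering its neighbors.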
In 2023, the scene is very different. Best practices in *most* forms of software and services are commodified; we know, from a decade plus of market activity, what works for most people in a very broad range of contexts. Standardization is everywhere, and resources for the easy development of UIs abound. It’s often the case that what the executives or PMs or engineers are imagining for an interface is *fine*, perhaps 75% of where it could be if a designer labored over it, and in some cases more. It’s also the case that if a designer adds 15% to a design’s quality but increases cycle time substantially, is another cook in the kitchen, demands space for ideation or research, and so on, the trade-off will surely start to seem debatable to *many* leaders, and that’s ignoring FTE costs! We can be as offended by this as we want, but the truth is that the ten millionth B2B SaaS startup can probably validate or falsify product-market-fit without hiring Jony Ive and an entire team of specialists.
Indeed, even where better UIs or product designs are possible, we now deal with a market of users who have developed familiarity with the standards; that 15% “improvement” may in fact challenge users migrating or switching from other platforms, or even just learning to use your software having spent countless hours using other, unrelated software.
A well-designed mind map is an overview of the experience that a product team is going to offer to the end user, and this overview helps designers to keep track of the most critical aspects of the interaction (such as what users will try to do in an app).
Are you mapping a current state (how a product works currently) or the future state (how you want it to work in the future)? Depending on the answer, you will build your map based on the design hypothesis (if you’re mapping the future experience of the product) or user research (if you’re mapping the current experience).
There is a simple technique that can help you to find all possible scenarios of an interaction. Use the “As a user, I want to [do something]” technique. “Do something” will describe the action, and this action will be a candidate for the nodes of your mind map. But remember that you need to focus on user needs, not features of your product.
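The technique is mechanical enough to sketch in a few lines (the story texts below are invented examples, not from the article): strip the “As a user, I want to” prefix and what remains is a candidate node for the mind map.

```python
import re

# Hypothetical user stories for illustration.
stories = [
    "As a user, I want to search my notes",
    "As a user, I want to share a note with a friend",
]

# The "[do something]" part of each story becomes a candidate node.
nodes = [re.sub(r"^As a user, I want to ", "", s) for s in stories]
print(nodes)  # -> ['search my notes', 'share a note with a friend']
```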
The central object can be a feature of your product that you want to learn more about, or a specific problem to solve. All other objects will be subtopics of that starting point.
In essence, first principles thinking is breaking problems down into fundamental truths and constantly questioning what you think you know.
However, if the feature doesn’t meet the user's needs, then it won’t get used. If it doesn’t get used, it won’t provide much value to either the business or the user.
“I think it’s important to reason from first principles rather than by analogy. The normal way we conduct our lives is we reason by analogy. **With analogy** we are doing this because it’s like something else that was done, or it is like what other people are doing. **With first principles** you boil things down to the most fundamental truths…and then reason up from there.”
The regimented daily pages notetaking routine made everything worse. It turned the writing process into a multi-year death march, filling the folders of my hard drive with unusable nonsense that I didn’t believe in then and don’t believe in now.
The job of *notes for creativity* is to: • Generate ideas in a structured way through research and sketching. • Preserve those ideas. • Explore the ideas until they have gelled into a cohesive plan or solved a problem.
The job of *notes for knowledge* is to: • Extend your memory to help you keep track of useful information (client data, meeting notes, references). • Connect that information to your current tasks or projects so that you can find it when you need it.
The job of *notes for understanding* is to: • Break apart, reframe, and contextualise information and ideas so that they become a part of your own thought process. • Turn learning into something you can outline in your own words.
Notes for creativity tend to favour loosely structured workspaces. Scrivener and Ulysses probably come the closest, though, in practice, I have doubts that either of them is loosely structured *enough*.
Notes for knowledge favour databases, ‘everything-buckets’ (apps that expect you to store *everything* in them), and hypertextual ‘link-everything’ note apps.
Notes for understanding tend to favour tools that have powerful writing or drawing features (which you favour will depend on your skill set and comfort).
Knowledge bases become too rigid to serve as workspaces for creativity. The creative spaces are too loosely structured to work well as knowledge bases. You can integrate writing and drawing tools in either, but that serves notetaking for understanding only up to a point. Most knowledge bases preserve too much detail and context, which gets in the way of reframing and contextualization. And overly full-featured writing or drawing tools could make the creativity tools too complex to use.
Explorers poke through the unknown, experimenting, trying many little dead-ends. Explorers meander, constantly changing directions based on hunch, mood, and curiosity. **Explorers are hard to follow.** It’s better to let them wander alone, then hear their tales. Explorers occasionally find a great place that would make a better home for many people. So that makes a job for a leader. Leaders are easy to follow. Leaders say, “Here’s where we’re going. Here’s why this will improve your life. Here’s how we’re going to get there. Let’s go.” Leaders sell the dream. Leaders describe the destination clearly and simply so it’s easy to understand and repeat. Even someone in the back of the pack, that can’t hear the leader, can follow along. **Leaders go in a straight line.** Leaders simplify. Explorers are bad leaders.
After a year of work, the CEO greenlighted a new type. We were ready to launch. However, someone in the product organization started demanding to A/B test the new typeface, which was already greenlighted. So we set up the A/B test. iOS and Android came back neutral; the Web came back slightly negative. We hypothesized that the new font added slightly longer load times because people didn’t have it cached. We tried to load both fonts to everyone and display the new font only to some to control the load time variable. We realized the A/B testing platform had a bug, and the experiment was not running properly. Our engineers spent weeks trying to debug. We couldn’t find any reason why the new typeface would perform worse. We developed it to make it more legible and work better in smaller sizes. We eventually launched the typeface regardless of the A/B testing. This moment made me write off A/B testing as a cargo cult or a way to avoid making decisions. We wasted weeks testing something where the only outcome was to launch it anyway.
Linear is a high-frequency product, similar to email clients, with daily use across multiple short sessions. It’s a heavy front-end product, and most actions involve managing things in the UI. You need to make it fast and eliminate friction as much as possible. A tool like Linear is about communicating and coordinating work. Getting people to use the tool requires making it as frictionless as possible and, ideally, something they want to use instead of having to use it.
**Karri:** We do, and it’s a constant tension between our thinking and customer feedback. Since there are many existing tools, some of the feedback comes from the fact that people are used to things, not necessarily that they are the right way to do things. We internally talk about this tension as art and science. The science part is your research to understand the customer’s problems. The art part is you use your intuition and judgment to build. I’d say Linear is maybe 80% of our intuition and 20% of what people have asked for.
We also often default to the most constrained approach because it uncovers more reasons why the solution is too constrained. You should set constraints, see if people are hitting walls with them, and expand to find the optimal solution. If you start with the most flexible or expansive solution, you don’t get the feedback, and your users might use the feature in unintended ways.
If I asked you what music you like, the chances are the answers will be sporadic and unorganized—whatever is top of mind. If I instead asked, "who are the top five musicians of all time?" The ordered list of 1-5 forces critical thinking and ranking value vs. an unordered list. Creating a list is one of the simplest ways to build taste, debate, and put your opinion out there. Creating and publishing lists makes you exert your point of view on what's important. Whether it's a Top 10 year in review or the Mount Rushmore of Los Angeles Lakers players, it's human nature to rank.
Stephan Ango
The hybrid path means developing expertise in two or more distinct areas. Having several specialities allows you to see patterns that no one else can see, and make contributions that no one else would think of. **The world needs more hybrid people.**
The T-shaped hybrid path is one that many curious people follow. You grow your skillset and experience in areas that are adjacent to your dominant expertise. For example, engineering and design, or singing and dancing.
Hybrid people are important for the same reason that [composite materials](https://en.wikipedia.org/wiki/Composite_material) and [alloys](https://en.wikipedia.org/wiki/Alloy) are important. From [Wikipedia](https://en.wikipedia.org/wiki/Composite_material): > These constituent materials have notably dissimilar chemical or physical properties and are merged to create a material with properties unlike the individual elements. By becoming a hybrid, you can become greater than the sum of your skills. By becoming a hybrid you can choose how you want to be unique. Countless unique combinations are available to you.
Motivation, broadly speaking, operates on the erroneous assumption that a particular mental or emotional state is necessary to complete a task.
Discipline, by contrast, separates outward functioning from moods and feelings *and thereby ironically circumvents the problem by consistently improving them*.
**Successful completion of tasks brings about the inner states that chronic procrastinators think they need to initiate tasks in the first place.** Put in simpler form, **you don’t wait until you’re in Olympic form to start training. You train to get into Olympic form.**
There is another, practical problem with motivation. It has a tiny shelf life, and needs constant refreshing.
By contrast, discipline is like an engine that, once kickstarted, actually *supplies* energy to the system.
In summary, motivation is trying to feel like doing stuff. Discipline is doing it even if you don’t feel like it. You get to feel good *afterwards*. Discipline, in short, is a system, whereas motivation is analogous to goals. There is a symmetry. Discipline is more or less self-perpetuating and constant, whereas motivation is a bursty kind of thing.