GMoromisato 14 hours ago

I think Sinofsky is asking a question: what does the future look like given that (a) writing is thinking but (b) nobody reads and (c) LLMs are being used to write and read.

It's that (already) old joke: we give the LLM 5 bullet points to write a memo and the recipient uses an LLM to turn it back to 5 bullet points.

Some plausible (to me) possibilities:

1. Bifurcation: Maybe a subset of knowledge workers continue to write and read and therefore drive the decisions of the business. The remainder just do what the LLM says and eventually get automated away.

2. Augmentation: Thinking is primarily done by humans, but augmented by AI. E.g., I write my thoughts down (maybe in 5 bullet points or maybe in paragraphs) and I give it to the LLM to critique. The LLM helps by poking holes and providing better arguments. The result can be distributed to everyone else by LLMs in customized form (some people get bullet points, some get slide decks, some get the full document).

3. Transformation: Maybe the AI does the thinking. Would that be so bad? The board of directors sets goals and approves the basic strategy. The executive team is far smaller and just oversees the AI. The AI decides how to allocate resources, align incentives, and communicate plans. Just as programmers let the compiler write the machine code, why bother with the minutiae of resource allocation? That sounds like something an algorithm could do. And since nobody reads anyway, the AI can direct people individually, but in a coordinated fashion. Indeed, the AI can be far more coordinated than an executive team.

  • Swizec 13 hours ago

    > 1. Bifurcation: Maybe a subset of knowledge workers continue to write and read and therefore drive the decisions of the business. The remainder just do what the LLM says and eventually get automated away.

    This already happens. Being the person who writes the doc [for what we wanna do next] gives you ridiculous leverage and sway in the business. Everyone else is immediately put in the position of feedbacking instead of driving and deciding.

    Being the person who feedbacks, in turn, gives you incredible leverage over people who just follow instructions from the final version.

    • p_v_doom 6 hours ago

      > Being the person who writes the doc

      If only people read them. Everyone is so pressured by made up goals, fake deadlines and horrible communication, that nobody ever wants to read more than a bullet point or two.

      • skydhash 5 hours ago

        I think GP is talking about the docs at source of the action, not the reports that came after.

  • p_v_doom 6 hours ago

    > The AI decides how to allocate resources, align incentives, and communicate plans

    Not gonna happen. The resources, incentives, and even the plans are almost exclusively about communication and people work. You can run as many optimization algorithms as you want, but an organization is ultimately made of people, and even in small startups the complexity and nuance involved in resource allocation, planning, and communication is too big for anything short of a mega-super-AI to handle. Hell, in most companies these parts are incredibly dysfunctional now ...

  • didericis 13 hours ago

    4. Degradation: Humans with specialized knowledge lose their specialized knowledge due to over reliance on AI, and AI degrades over time due to the lack of new human data and AI contaminated data sets.

    • bugbuddy 13 hours ago

      5. Society collapses in an Idiocratic fashion.

      • hshdhdhj4444 3 hours ago

        Even if AI isn’t sentient and/or thinking, if AI can functionally emulate everything humans can do, I think that will raise significant existential questions about our future.

      • volemo 12 hours ago

        6. PROFIT?

        • coldtea 8 hours ago

          Short term, sure.

  • oldge 14 hours ago

    The executives in example three seem redundant and a cost center we can eliminate.

    • bugbuddy 13 hours ago

      Everyone without significant capital is redundant and can be eliminated.

    • soco 9 hours ago

      You might be missing that it's exactly the executives who can and will eliminate people. Why would they eliminate themselves? They'd rather use the same AI to invent a reason for them to stay.

  • makeitdouble 12 hours ago

    > It's that (already) old joke: we give the LLM 5 bullet points to write a memo and the recipient uses an LLM to turn it back to 5 bullet points.

    This is already how we moved from stupidly long and formal emails to Slack messages. And from messages to reactions.

    I understand not every field went there, but I think it's just a matter of time before we collectively cut the traditional boilerplate, which would negate most of what LLMs are bringing to the table right now.

    > 2. Augmentation

    I see it as the equivalent of IntelliSense but expanded to everything. As a concept, it doesn't sound so bad?

  • coldtea 8 hours ago

    >Transformation: Maybe the AI does the thinking. Would that be so bad? The board of directors sets goals and approves the basic strategy.

    We've already lost the war if we only consider this from a business perspective.

    AI "doing the thinking" will cover all of society and aspects of life, not just office job automation.

  • andai 13 hours ago

    The other day I gave GPT a journal entry and asked it to rewrite it from the POV of a person with low Openness (personality trait). I found this very illuminating, as I am on the opposite end of that spectrum.

evolve2k 16 hours ago

The sci-fi movie Brazil (nothing much to do with the country) is set in a bureaucratic, dystopian future. At the start of the movie a literal real-world bug falls into "the machine that never makes mistakes", and the error plays out over the course of the movie with a somewhat (negative) butterfly effect.

I feel the movie well captures the tone of the current moment.

Worth a watch.

  • dexwiz 15 hours ago

    For anyone who wants to watch, there are a few endings in similar flavors to the Blade Runner endings.

    • cjbgkagh 14 hours ago

      I guess like the Penfield Mood Organ you can dial how you want to feel. But you probably shouldn’t pick ‘good’.

      • nudgeOrnurture 7 hours ago

        you should probably let a machine pick for you, one that is aware of the choices made for others

karaterobot 17 hours ago

The title made me think this was going to be about the mental consequences of outsourcing writing to AI. In fact, the article is entirely about people not reading documents (corporate documents, to be exact). His examples are from the 2000s, so the problem has absolutely nothing to do with AI.

Heck, I, too, have noticed that nobody reads anything: what does that have to do with AI? At least with AI, people could read a summary of his 30-page corporate memo and ask it questions.

I repeat: that people do not read is not a new problem, nor is it made one iota worse by AI.

  • gleenn 17 hours ago

    I firmly believe very few people read anything. They don't read long things, and they don't even read short ones. One thing I always found funny was having product managers see there were problems with the UI; then I would get tickets to add text near the problems. It always cracked me up, because if users didn't even read the button they were clicking, why would they read a paragraph nearby?

    • wlesieutre 16 hours ago

      In school they were really big on the 5-paragraph persuasive essay format. I guess because it teaches you to think through an argument and present it to someone.

      In practice, I find that if I don't format something as a bulleted/numbered list, nobody is going to look at it.

      • Herring 16 hours ago

        I'm sympathetic. It's like if code isn't color-coded and properly indented, I'm just not reading it.

      • snoman 12 hours ago

        > In practice, I find that if I don't format something as a bulleted/numbered list, nobody is going to look at it.

        I’m one of those people, if I’m honest. If I’m reading for work, I want the minimum words necessary to get the point across. I’m looking for information, not a story.

        If I’m reading for enjoyment though, it’s another thing entirely.

      • tonyedgecombe 10 hours ago

        I was always surprised when my customers read past the first item in a list.

      • bitwize 12 hours ago

        Now we're in the era of:

        Alice: Hey ChatGPT, please take this bullet list of points and turn it into a polite, but assertive and persuasive, email to Bob.

        Bob: Hey, ChatGPT, please take Alice's email and turn it into a succinct list of bullet points.

    • ozim 10 hours ago

      I don’t have the link, but somewhere in the last 3 years I read an article I found on HN.

      It was about the fact that lots of people actually struggle with reading.

      The bad part is that they don’t even know, or they won’t admit it. Then they make people who are fluent at reading angry, because those people expect that reading a paragraph takes no noticeable effort and assume everyone is fluent at reading.

  • mlinhares 15 hours ago

    Completely wild counterpoint: now that our docs are available to the AI bot, people are interacting with them more, because when they ask, the bot can reply, explain, and then say "docs are available here". To the point that I'm actually investing even more of my time in writing them.

    • sothatsit 13 hours ago

      I agree, it feels like the value of text documents has gone up immensely with AI making it much easier to make use of them. Accurate reference documentation that AI can point to is really valuable.

      Although, the next step on this ladder is going to be that people don't even double-check the facts in the original document, and just take what the LLM said as truth, which is perhaps scarier to me than people not reading the original documents in the first place...

sakesun 16 hours ago

Just watched Veritasium's "How One Company Secretly Poisoned The Planet"

https://www.youtube.com/watch?v=SC2eSujzrUY

Inventions created for convenience decades ago have now become health concerns. I wonder how AI might affect our intellectual well-being in the decades to come.

  • jojobas 15 hours ago

    It's already affecting students' knowledge. Why bother understanding something if you can ask ChatGPT to do the work?

muratsu 17 hours ago

In my experience people don't read these large documents because they are not personalized/relevant. When you're writing to a large audience, you naturally assume people know as little as possible about the subject and start from there. In a corporate setting this comes off as irrelevant or boring. I'm sure rebranding initiatives like One Microsoft, Copilot, or Office 365 make things simpler for executives, but employees are left confused. The memo usually mentions future efficiency gains or synergies but will omit why the brand change is needed. Surely if you're sending a memo to 100k people it makes sense not to talk about negatives (politicians are a good example of this), but at that point the value of the memo is also very low. This may come off as odd, but short-format videos seem to work much better at large scale. Perhaps the future of communication is really just lots of easy-to-consume, repeated content.

  • jillesvangurp 13 hours ago

    Reading long-form text is a big commitment in time. In a business context that's simply not appropriate. I always joke about documentation as write only content. It's the type of thing you are asked to write that then doesn't get used or read. More than once I've been asked whether I could produce a diagram or some sort of documentation, only to realize later that after the thumbs up I got on delivery, nobody was actually doing anything with the document.

    Here on HN, short comments are more appreciated than longer comments. People are skimming, not reading. The ability to say a lot with very few words is what is appreciated the most.

    That's nothing new btw. As Mark Twain once wrote: “I didn’t have time to write a short letter, so I wrote a long one instead.”

    Using LLMs to rephrase things more efficiently is a good use of LLMs. People are getting used to better signal to noise ratios in written text and higher standards of writing. And they'll mercilessly use LLMs to deal with long form drivel produced by their colleagues.

    • p_v_doom 6 hours ago

      It's also not just business text. People don't read, period. Sometimes they don't even watch. Sometimes I feel that reading the docs is a superpower. Reading an article, watching the occasional talk? It's enough to take over the world... or to get to a point where you can predict exactly what is going to happen and know how to fix it, but don't have the power to do it, so you are doomed to watch as it all crumbles down yet again, just like you foretold last time, and the time before that, and the one before that...

    • voidhorse 13 hours ago

      > I always joke about documentation as write only content. It's the type of thing you are asked to write that then doesn't get used or read.

      That's not actually true. It may be true now, while everyone still has context, but if you built a sound system that will outlast your own contributions to it, the documentation becomes invaluable.

      > People are skimming, not reading.

      Yes. The cause of much suffering and misery in the modern world.

      > And they'll mercilessly use LLMs to deal with long form drivel produced by their colleagues.

      otoh, they also use them to generate more noise and drivel than we ever imagined possible. When it took human effort to pump out boring corp-speak, that at least put a cap on the amount of useless documentation and verbiage being emitted. Now the ceiling has been completely blown off. People who have been incapable of even crafting a single sentence their entire lives can now shovel volumes of AI-generated garbage down our throats.

Herring 16 hours ago

Idk, I'm more optimistic than the author.

I'm currently using an LLM to rewrite a fitness book. It takes ~20 pages of rambling text by a professional coach // amateur writer and turns it into 4 crisp, clear pages of LaTeX with informative diagrams, flow charts, color-coding, tables, etc. I sent it out to friends and they all love the new style. Even the ones who hate the gym.

My experience is LLMs can write very very well; we just have to care.

Hubert Humphrey (US VP) was asked how long it would take him to prepare a 15-minute talk: "one week". Asked how long to prepare a two-hour talk: "I am ready right now".

  • kaushikt 15 hours ago

    This. I am almost addicted to dropping long voice notes (pronounced: rambling), and LLMs do such a great job of creating and managing these notes. I can then convert that format into anything.

    Although, I agree with the author, since many of the emails and messages on LinkedIn I get these days are just long posts churned out by AI. I am not reading them anymore; it's some other AI summarising them, because no human talks or writes like basic AI prompting does. So, so difficult to read.

  • ramesh31 15 hours ago

    >My experience is LLMs can write very very well; we just have to care.

    My experience is that people who think this are really bad writers. That's fine, because most human writing is bad too. So if your goal is just to put more bad writing into the world for commercial reasons, then there's some utility in it for sure.

    • Herring 15 hours ago

      Then in the A/B test you're the 1 failure out of 7 tries; everyone else loved it, and you haven't even looked at it. I can live with those results. Lesson: don't always publicize the LLM help.

      • the_af 13 hours ago

        > everyone else loved it

        Most people are very bad readers, too.

        For example, most of my coworkers don't read books at all, and the few that do, only read tech or work-related books. (Note that most don't even read that).

        • Herring 13 hours ago

          That doesn't matter. It's like code syntax highlighting -- A good tool benefits everyone. You can pride yourself on doing it the hard way, like reading monochromatic code, but the world has already moved on.

          FWIW I think there's a kernel of truth when you worry about reading skills, but 1) that's a longer trend involving all kinds of political and cultural issues, and 2) right now I'm happy with any improvement to technical communication. I think people might read more if books were better written & more respectful of their time.

          • the_af 4 hours ago

            > That doesn't matter. It's like code syntax highlighting -- A good tool benefits everyone.

            I obviously disagree: it does matter.

            "Most" people love your AI-assisted writing because they have no taste; or rather, they are not good at reading. Yes, one must become good at reading; it must be exercised. Reading and writing -- use it or lose it; the brain is a metaphorical "muscle".

            > I'm happy with any improvement to technical communication

            If you mean business emails, corporate communications, even teamwork related things, I agree. That's mostly boilerplate writing anyway, and everyone understands it's the "junk food" of writing and reading, so why not automate it? Like the person you replied to said, very aptly:

            > So if your goal is just to put more bad writing into the world for commercial reasons, then there's some utility in it for sure.

            If you instead mean creative writing -- as I thought you were also claiming -- I strongly disagree.

    • api 15 hours ago

      As with visual art, AI is replacing humans when it comes to creating filler and background material.

      I haven’t seen many examples of anything in either visual or prose arts coming out of an AI that I’ve liked, and the ones I have seen seem like they took a human doing a lot of prompting to the point that they are essentially human made art using the AI as a renderer. (Which is fine. I’m just saying the AI didn’t make it autonomously.)

  • wvenable 13 hours ago

    > 4 pages of latex with informative diagrams, flow charts, color-coding, tables, etc.

    What tool are you using for this?

    I too have used LLMs to do writing that, frankly, I wouldn't have done without them. Often I don't even take what they say, but it helps to get the ideas out in written form, and then I can edit it to sound more like how I want to sound.

    • Herring 13 hours ago

      Gemini pro + overleaf (learned latex from a long stint in grad school). Cheers mate.
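
      For the curious: the color-coding and tables are plain LaTeX, nothing AI-specific. A minimal sketch (assuming the xcolor and booktabs packages; the lift names and colors here are just made-up examples) looks something like:

      ```latex
      % Color-coded summary table: row shading via \rowcolor, per-cell tints via \cellcolor
      \documentclass{article}
      \usepackage[table]{xcolor}  % the [table] option enables \rowcolor/\cellcolor
      \usepackage{booktabs}       % \toprule, \midrule, \bottomrule
      \begin{document}
      \begin{tabular}{lcl}
      \toprule
      \rowcolor{gray!20} Lift & Sets & Intensity \\
      \midrule
      Squat    & 3 & \cellcolor{green!15} moderate \\
      Deadlift & 2 & \cellcolor{red!15} heavy \\
      \bottomrule
      \end{tabular}
      \end{document}
      ```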

    • intended 12 hours ago

      Production became easier, but your consumer changed as well.

      • wvenable 21 minutes ago

        When I wasn't producing anything, I didn't have any consumers. But it's a fair point; I think a lot of effort needs to go into not sounding like an AI. I hate low-effort AI content in videos, text, etc. It still takes effort, but I find it much less effort than sitting at an empty page.

  • southernplaces7 15 hours ago

    Might I ask what LLM you use for it to do all of that, including the visuals, so neatly?

    • Herring 15 hours ago

      Gemini pro. The flow chart, color-coding, tables etc are just latex. The illustrations are picked off the web. I suspect I might have to hire a professional photographer eventually.

      • southernplaces7 12 hours ago

        >The flow chart, color-coding, tables etc are just latex. The illustrations are picked off the web

        Thanks for replying so quickly! Just to clarify, what do you mean by latex?

        So you don't use AI-generated illustrations. Those are real.

gchamonlive 14 hours ago

I think it's the type of thing where mere awareness of the problem counts a lot toward counterbalancing it.

It's like when you are growing up and a certain type of behaviour that would work for socializing when you're 14 years old suddenly doesn't work anymore when you are 21. You learn about it when someone you trust brings your attention to it, and suddenly you have the opportunity to reflect and change your behaviour.

The thing I really fear with AI is the same as with mind-altering drugs like Adderall: in some places you just can't afford the luxury of not using it without losing competitiveness (I think; I've never used it, but I know of people who do regularly).

So maybe we don't want to not read what we write, but sometimes there is a middle manager making you do it. Then it's a problem of context, and awareness in itself doesn't help, except maybe in the long run.

  • edg5000 11 hours ago

    In what industries and regions do you see this amphetamine use? From TV I know maybe some bankers do it. Personally I really need my caffeine, but I've never heard of people taking amphetamine for work. Outside of work is another thing, though.

    • gchamonlive 5 hours ago

      Undergrads, doctors, lawyers... all the fields where people end up working over 60 hours a week, most of it requiring mental concentration.

  • MoltenMan 11 hours ago

    > I think it's the type of thing where mere awareness of the problem counts a lot toward counterbalancing it.

    I'm unfortunately not too optimistic about this. There are plenty of things that are bad for you that everyone is aware of: not exercising, eating junk food, spending all day online, etc. But so many people do these things anyways; the human mind is incredible at cheating itself to make things easier on itself, and I don't think this is an exception.

    • gchamonlive 5 hours ago

      Great point. I think AI is a bit different, though: with junk food there's a large industry, like the sugar industry, that spends lots of resources on advertising to deliberately make people consume more even though it's bad for them. There is a large industry behind AI too, one of the greatest; it's just that for now it advertises differently than these more traditional industries.

      And I think we need to fight these kinds of bad incentives in society, but at the end of the day if you don't want to exercise nobody can make you. If you want to delegate all your work to AI, nobody can stop you.

      I'm more concerned about those that understand the problem, want to change but feel overwhelmed.

  • caseyohara 13 hours ago

    > mind-altering drugs like Adderall

    This is strange to me. You could give me 100 chances to guess which “mind-altering drug” you are thinking of and Adderall wouldn’t cross my mind. Amphetamine is a stimulant; it’s plainly not mind-altering in the way that psychedelics are. Adderall is mind-altering in the same way that caffeine is. Which is to say, it isn’t.

    • saulpw 13 hours ago

      Yes, Adderall is mind-altering, even if it's not as profound as psychedelics. Caffeine is also mind-altering to a lesser degree.

    • bowsamic 11 hours ago

      You either have ADHD and so it affects you differently or you’ve never tried it. Anyone who has tried amphetamines knows it’s a completely different story to caffeine

mayukh 14 hours ago

I use AI a ton for writing emails and other corporate stuff, I have no problems there, it structures and presents them well enough and quickly that it saves me a ton of time.

Where I am conflicted is creative writing -- it's something I have been interested in but never pursued... and now I am able to pursue it with AI's help. There is a degree of embarrassment when confiding to folks that, yes, a piece was AI-assisted... see here what I mean: https://humancurious.substack.com/p/the-architect-and-the-cr...

  • photios 12 hours ago

    > I use AI a ton for writing emails and other corporate stuff, I have no problems there, it structures and presents them well enough and quickly that it saves me a ton of time.

    My manager uses AI when generating docs and emails. I think he does it because English isn't his native language and he goes for the "polished" look.

    Frankly, I prefer the grammar errors and the authentic version. The AI polish is always impersonal and reeks of low effort.

    I also love how everyone thinks coworkers don't notice the AI touch... of course we do.

  • apparent 13 hours ago

    Do you find yourself learning from how AI structures your bullet points or rewrites messages?

    I sort of feel like it would blunt the downsides of AI rewriting everything if it had to explain why it was making all the changes. Being told the rationale would allow users to make better decisions about whether to accept/reject a change, and also help the user avoid making the same writing mistakes in the future.

  • audinobs 5 hours ago

    I spend hours a day on emails and I don't get this.

    Whatever email I send would just be the prompt, otherwise the LLM is just adding words that don't need to be there without reading my mind.

  • indigodaddy 13 hours ago

    Do you feel weird that your coworkers must largely know that you use AI to communicate with them? I'd feel weird doing it, I know that much, so I've never even contemplated using AI for emails/communication.

    • xwolfi 11 hours ago

      I'm in a new budding relationship and we're exchanging letters. There is no way on Earth I touch ChatGPT, but if I write well, how believable is it? Is she?

      I hate this thing, it's so soulless.

jart 16 hours ago

Putting aside the people who need LLMs as a prosthetic, the only writers who are asking AI to write lengthy prose for them are the ones who on that occasion didn't have anything important to say in the first place. Maybe they work a fake job where they're required to do ritualistic writing, similar to the medieval scribes who laboriously copied books by hand over and over again. Now, thanks to AI, they're liberated from having to toil through that lengthy process. So, yes, writing is thinking. It's also willpower. But if the purpose for it didn't matter in the first place, then nothing is actually lost if you stop doing it. You just won't get punished.

raincole 16 hours ago

The title (subtitle, to be precise) is almost completely unrelated to the content.

The title is about AI; the content is just rambling about how people dislike reading business reports.

If writing is thinking, I think the author is having trouble thinking coherently.

  • gavmor 16 hours ago

    I was hoping he'd touch on how writing is "embodied" or "extended" cognition, ie how it allows for manipulation and reorganization of ideas in ways that are functionally similar to what occurs within the mind a la Clark and Chalmers' “Otto’s notebook” thought experiment (in which an Alzheimer's patient has written all of his directions down in a notebook to serve the function of his memory).

    Or how the thoughts we have as we are writing shape our understanding, and so we come out with not only a written composition, but a new frame of mind; AI generated writing allows us to preserve our frame of mind—for better or worse!

    There is something to be said for the ways in which coding, too, is an exercise in self-authoring, ie authoring-of-the-self.

irjustin 14 hours ago

To me, the short answer is we collectively stop learning.

I'll take the flip side of the argument and say AI allows humans to get back to raw, first-principles research and writing.

In many ways, the middlemen of literature (i.e. re-bloggers, article writers, etc.) are moot: groups who don't actually add value but cram an insane amount of ads onto the page. This pushes people to actually write original content.

Is this perfect? no way. Is it dangerous? yes.

There are so many problems, but for better/worse, too many of us have changed the way we operate. It's here to stay.

bravesoul2 15 hours ago

Then nothing is being learned by either the AI (at least for now, since LLMs are pretrained rather than learning dynamically) or the human.

I don't use LLMs at all for writing. Mainly for checking stuff and the most boilerplate of code.

aaronbrethorst 13 hours ago

Fun fact: Socrates thought writing would lead to forgetfulness. https://newlearningonline.com/literacies/chapter-1/socrates-...

  • voidhorse 13 hours ago

    And he was right. Poets and bards used to memorize entire epics. Writing changed this not by augmenting their abilities, but by changing expectations: it became acceptable to read from a written record, rather than from memory.

    My memory gets exercised a lot less frequently than it would need to without writing.

    But memory is also not thinking. It is a component of thinking, but it is not thinking itself. Discourse, arguably, whether in natural or symbolic language, is thinking. If we offload all of that onto machines, we'll do less of it, and yes, our expectations will change. But I think the scenario here is different from the one Socrates faced, and the stakes are slightly higher. And Socrates wasn't wrong; we just needed internal memory less than we thought once external memory became feasible, as cool and badass as it may seem to "own" Socrates in retrospect.

  • blondie9x 13 hours ago

    Drastic oversimplification of what is meant here. The core takeaway is that writing can give the illusion of wisdom without the reality of it. True wisdom must be cultivated internally, through thoughtful interaction and lived experience, not simply consumed passively through text.

yunwal 15 hours ago

This is on par with “when did you stop beating your wife?” I wish people would stop thinking logical fallacies are clever.

anonzzzies 16 hours ago

People do not read, not even short things. They never did. Luckily I am old and senior enough that when I receive a Teams invite to "go over the email/doc together" I can just say "nope" and decline, as I know they basically want me to read it to them.

  • RedShift1 15 hours ago

    Sometimes people ask me "how do you know how to do that?" and the only thing I did was read the manual...

jasonthorsness 17 hours ago

"My entire career has been defined by the reality that people in business don’t really read."

The myth is that the culture at Amazon is contrary to that. Is that true?

I definitely feel like writing clarifies my own thoughts and helps me find inconsistencies or other problems with what I think I want to say. If everyone is letting the LLM do most of the work writing and reading, their thought processes and eventual conclusions are definitely going to be strongly influenced by that LLM. A great justification for the alignment efforts, I guess, or a great opportunity for propagandists.

  • malfist 16 hours ago

    It is true at Amazon. Nobody is expected to come to the meeting having read the document; the first X minutes of the meeting are dedicated to reading it in silence. Thus everyone winds up reading the document. Or at least most do; some multitask, but they weren't going to pay attention anyway.

  • jasonthorsness 17 hours ago

    Ahah, I didn't realize that this article was by Steven Sinofsky from Microsoft; I used to work in his organization (Windows) for many years. I'm definitely one of those he rightly accused of not reading the long documents he would send (at least some of them) :P

dzonga 10 hours ago

Every once in a while everyone should watch Jonathan Blow's talk "Preventing the Collapse of Civilization" [0].

By not reading and going through docs, we cut out our ability to critique, as we just fully ingest what AI gives us. By not writing, we don't exercise thinking (questioning our assumptions, etc.), since what AI writes will mostly be positive, not negative.

[0]: https://www.youtube.com/watch?v=ZSRHeXYDLko&pp=ygU1cHJldmVud...

And just like that, society stops thinking. Stops creating.

Then civilization collapses.

kelseyfrog 17 hours ago

Writing is not thinking just as calculating is not "doing math".

We've invented the equivalent of a "calculator for words" and we're going through the growing pains of discovering that putting words together is a separate activity from thinking. We've never needed to conceptualize them as separate activities until now so we don't have the conceptual distinction and language to even describe them that way.

  • voidhorse 13 hours ago

    I would say: sort of.

    I think proving theorems is a better analogy than rote calculation—like the writing process, it is creative.

    While mathematicians don't sit down and smash random symbols together to concoct proofs, many of them will stress the importance of good notation. Chewing on notation a bit can help reveal connections in the hunt for a proof.

    When people claim that "writing is thinking," they mean something similar. The free-associative process of writing out our thoughts can help bring an initial clarity that is difficult to achieve without an external medium. Most of the time, we ought to polish those thoughts and connections just as a mathematician ought to polish their proof sketch, but there is some extent to which the actual activity does in fact make up a significant portion of the thinking process. This is why offloading all of that activity to LLMs is dangerous. You can judge the string of propositions produced by an LLM, but you haven't thought them.

Amaury-El 15 hours ago

Writing has always been my way of thinking things through. Sometimes I use AI to help with writing, and it does save effort. But over time, I’ve noticed that many of the ideas I should’ve taken the time to untangle myself just get skipped. The words are there, but it feels like I’ve missed the chance to really have a conversation with myself.

sixtyj 17 hours ago

Writing is not thinking. It’s similar to cutting through the jungle. With a pocket knife. And at the end you realize that you’re still at the very beginning. :)

Writing while reading is similar to writing while talking: it looks like multitasking, but in reality it is switching between the two. Really fast switching, but not true parallelism.

kevin_thibedeau 14 hours ago

These are the infinite monkeys at typewriters, with an effective culling algorithm. No thought is needed to get useful output "written."

alganet 14 hours ago

Sounds like a problem! Good luck solving it.

Is this post part of a series? I'll be looking forward to the part that addresses possible solutions.

neom 15 hours ago

Lots of confusion about what Steven is saying here. I'm pretty sure his point is: A) people don't actually read deeply in corporate and professional contexts, even pre-AI, and B) AI compounds the issue because it both creates vast quantities of content and is increasingly relied upon to summarize that content. The implication is a compounding loop of low-quality reading and writing. Basically: if writing is thinking, and people rarely read carefully even without AI, then widespread adoption of AI as both writer and summarizer may very well push orgs into cycles of shallow understanding and misinformation at large.

Personally: All my best business has been done in front of a whiteboard.

nikolayasdf123 14 hours ago

AI talking to AI will be the dominant form of communication

nudgeOrnurture 7 hours ago

As more people turn into pure, uncritical consumers, the unthinking majority grows stronger (congratulations, general) and larger.

Trends and hypes get much easier reach and more impact, and there are also going to be a lot more of them. You'll need pharmaceuticals and psychedelics to cope. I recommend investing in retreats and filling and running them with pretty, soothing, equanimous characters/people. Don't forget the petting zoo.

Whale-carried motions and marketing campaigns bypass psychological biases, fallacies, and emotions via pure mere exposure and literal indoctrination.

Context loses relevance and makes room for liminal but revenant, recurring, reused micro-dogmas,

all for quick, easy, and conforming licks. Like those easy 5 bullet points both workers won't remember and will have to look up, should they become contextual in the rare case of contradistinctive dogma(s).

stevenhuang 15 hours ago

Then AI is thinking. Next question.

AndrewKemendo 15 hours ago

The next few decades of humans grasping for existential meaning at the level of Camus, Kierkegaard, and Deleuze, and coming up empty because there's nothing there, are going to be interesting.

We're already seeing the regression to the mean, which is basically fanatical clinging to myths and a historicism that favors whichever period or place supports the lifestyle people personally want.

deadbabe 17 hours ago

Thinking is thinking. There is no substitute.

  • steveBK123 17 hours ago

    Yes but having to put your thoughts down in writing often forces you to think.

    • egypturnash 16 hours ago

      Putting thoughts down in writing also helps you make room for new thoughts; once it's out there on the page/screen then you can stop filling up a chunk of your working memory with it and go on to ask yourself "so what comes next because of that".

johnnienaked 17 hours ago

Writing isn't necessarily thinking. For example, you could have a spreadsheet of 20,000 words and use an algorithm to randomly pick a sequence of them. No one would call that thinking, even if such a process sometimes made intelligible sentences.

"AI" is using an algorithm and statistics in the same way: it's just more accurate at making intelligible sentences than the example above. I wouldn't call either thinking, would you?
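The commenter's thought experiment can be sketched in a few lines. The vocabulary and function below are invented for illustration, not taken from the article or any real system:

```python
import random

# Hypothetical sketch of the point above: sampling words at random
# from a fixed list yields sentence-shaped output with no thought
# involved anywhere in the process.
VOCABULARY = ["the", "cat", "runs", "quickly", "home", "a", "dog", "sleeps"]

def random_sentence(n_words, seed=None):
    """Pick n_words uniformly at random and join them into a 'sentence'."""
    rng = random.Random(seed)
    return " ".join(rng.choice(VOCABULARY) for _ in range(n_words))

print(random_sentence(5, seed=42))
```

Occasionally the output will parse as English by pure chance, which is the commenter's point: intelligibility of output doesn't imply thinking in the generator.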

  • adamtaylor_13 17 hours ago

    That’s not at all the point the author is making. The point is people don’t read. Now we don’t even write. At least what USED to be written had to cost a human time, something relatively precious.

    Now we can churn out text at rates unprecedented and the original problem, no one reading, is left untouched.

    The author wonders what happens when the weird lossy gap in-between these processes gets worse.

    There’s lots of evidence that writing helps formulate good thinking. Interestingly, CoT reasoning mirrors this even if the underlying mechanisms differ. So while I wouldn’t call this thinking, I also don’t think reducing LLM output to mere algorithmic output exactly captures what’s happening either.

    EDIT: previous != precious.

    • johnnienaked 14 hours ago

      >Now we can churn out text at rates unprecedented and the original problem, no one reading, is left untouched.

      I think you miss my point a bit.

      Any text that can be churned out at unprecedented rates likely isn't worth reading (or writing, or looking at, or listening to), and anyone consuming this stuff already isn't doing much thinking.

      You can lead a horse etc etc

  • bluefirebrand 17 hours ago

    > No one would call that thinking

    No one should call it writing, either