phkahler 4 hours ago

Maybe employers are just believing the hype from companies like OpenAI. The student responses seemed spot on to me. They need to learn the material and how to do their job, not chat with a bot.

  • pjc50 3 hours ago

    There's a long history of schools teaching hyper-specific skills, like how to use a particular version of Word. Then the technology changes.

    Does anybody think that the state of AI is sufficiently stable that things taught to high school students will be valid by the time they enter the workforce?

    Is there even anything generalizable to teach? The big pitch for AI is that it's supposed to be able to understand natural language. Then there's a long tail of specific hacks that you insert into prompts to get particular results. Any of which could change tomorrow. It's like teaching SEO in school.

    • danielbln 3 hours ago

      The generalizable part is not to blindly trust any information you come across, but to cross-reference, validate and evaluate it. Students shouldn't be learning how to specifically use ChatGPT, but they should absolutely learn why autoregressive transformer systems will confidently lie, and how to avoid the pitfalls while making use of the things that LLMs are actually good at. In the same way that we learned that Wikipedia alone is probably not a great source to rely on.

  • nicce 4 hours ago

    They need to learn the material first. Then AI is just a tool, and you lose if you don't use it.

    • eqvinox 3 hours ago

      No, they need to learn the material first, and then, if it is beneficial for the situation, they need to learn the shortcomings and pitfalls of AI (both generically and specifically to their use case) before using it.

      You're right in that AI is a tool that might be necessary to use, but just like with a heavy industry machine tool that can chop your arm off if you use it wrong, there need to be operator training requirements.

      • pjc50 3 hours ago

        Machine tools are deterministic enough: they have interlocks and if you defeat the interlocks or put your hand in a specific space, you're in trouble. Is AI deterministic enough for this approach to safety?

    • amelius 3 hours ago

      The nice thing about OpenAI is that they are so open about their technology. They even have programs that teach students how to build their own LLMs so they never become dependent on something they cannot control.

      • jampekka 3 hours ago

        OpenAI LLMs are some of the least open.

  • danielbln 4 hours ago

    "They need to learn the material and how to do their job, not muck around with computers."

    "They need to learn the material and to do their job, not search the Internet."

    • rsynnott 3 hours ago

      Ah, yes, what a good idea, classes in how to search the internet. If I’d had those when I was in school, I would now be an expert user of AskJeeves. What a missed opportunity.

      • danielbln 3 hours ago

        There is no need to be snide and dismissive. It is absolutely necessary to learn how to find, sift through and evaluate information. Whether the tool of choice is a book index, AskJeeves, Google or ChatGPT, all of these require specific knowledge to get the most out of them. Where do you acquire that knowledge?

        • pjc50 3 hours ago

          From the tool itself?

          Especially when the tools are so liable to change at zero notice. It's not the responsibility of the public education system to do product training for megacorps.

          (People who disagree with this: let's see your syllabus for AI 101, with a guarantee that all its contents will still be valid in 5 years' time.)

          • danielbln 3 hours ago

            That's where you generalize and teach students higher-level concepts while merely referencing the tools du jour (same as Wikipedia before).

    • lm28469 3 hours ago

      If my school focused on teaching me how to use stackoverflow instead of teaching me how to think and solve problems I'd be in deep shit now.

      Same thing for AI: it's a crutch at best, not a way of doing things, and especially not something you need to focus on while teaching.

      • danielbln 3 hours ago

        These are not at odds with one another. Schools should be teaching how to research information and which tools to use and how.

        That can be literature research, Internet research and yes, ultimately it will have to include how to query a language model to solve various problems.

        You can copy and paste a solution from a book/SO/LLM or you can use it to understand, improve, research and solve various tasks.

DiscourseFan 4 hours ago

It's not really useful unless you're a high school student or something. And even that's bad enough. There are definitely use cases, but it will probably take like a decade or so to integrate it properly into industrial processes, and even then it will probably be limited and used in very strange ways you couldn't possibly imagine today. Like, someone is going to come up with something really fucking weird that saves a lot of time on some ultra-specific process and you won't even notice it. Meanwhile, people will expect work emails to be written with more personal expression or else they'll think an AI wrote it (I already know some people who got in trouble for using AI to write important emails and were then threatened with lawsuits for language in those emails that they themselves, of course, did not read).

  • eru 3 hours ago

    > (I already know some people who got in trouble for using AI to write important emails and were then threatened with lawsuits for language in those emails that they themselves, of course, did not read).

    Of course, the solution to that is to have an AI that scans your email for problematic language.

    • DiscourseFan 3 hours ago

      It's not that the language was "problematic," it's just that the contents were more sensitive than I think they realized, and so they were held to account for the specifics of the language they used.

      • pjc50 3 hours ago

        Like the Air Canada case?

        https://www.bbc.com/travel/article/20240222-air-canada-chatb...

        You are potentially liable for the contents of your work emails, even if you put a stupid disclaimer at the bottom. People have gone to jail over the contents of chat logs. It's one thing to take notes on a criminal conspiracy and another to have the AI make one up and drop you in it because you didn't read the email it sent.

        • DiscourseFan 2 hours ago

          Obviously I can't talk specifics, but no it wasn't that stupid. More like general negligence on their part.

Almondsetat 4 hours ago

Perhaps AI will achieve what Excel, scripting or programming failed at?

Education is slow; people still don't know how to use spreadsheet software or a scripting language to enhance their lives/work. Students don't use Excel or Python to cheat on their homework or exams, so they don't really have reasons to learn those tools.

Meanwhile, user-facing AI tools are often extremely intuitive and feel quite natural to interact with, so by the time young people reach working age they already have familiarity with what they need.

  • amonith 4 hours ago

    There’s a high possibility that AI will have exactly the opposite effect on work productivity for young people, at least for this generation, because technology in companies around the world lags years or even decades behind for simple economic reasons.

    In a similar way, the overuse of simplified touchscreen-based devices makes them unprepared for working with computers and other “old tech” permeating the modern office. Many of them do not even know what files are, and their tech skills make them almost completely unhireable.

    • eru 3 hours ago

      I suspect the average tech skill is about the same as in earlier generations.

      It's just that for older folks, the amount of time spent with computing devices was a decent proxy for tech skills. That correlation has probably broken down.

      • amonith 3 hours ago

        I'm not sure about the average, but I have zero data. We will see in the future, because the generation currently entering the workforce still used computers a bit, since tablets and phones weren't that powerful yet. It's all about the peeps currently in school.

        You need to spend time with computers to learn how to use them so if average PC usage drops I'd expect the average skill to drop as well.

  • tourmalinetaco 4 hours ago

    I truly hope so. Even IT majors at my college gave me weird looks when I said I used Excel, and later Python, to quickly do our math homework. I was taught very explicitly in my engineering course to do so, as my teacher had an Excel sheet for calculating hydraulic cylinder requirements, which inspired me to do all the work ahead of time. "Do everything to reduce downtime," he'd say, because downtime is lost revenue.

  • k__ 3 hours ago

    To be fair, Excel skills are tightly bound to Excel.

TrackerFF 3 hours ago

Also, the way I see it:

"AI skills" is comparable to what using a search engine was before.

You'd be absolutely amazed how many people still can't use search engines beyond the absolute bare basics of typing something into Google and giving up if the result isn't at the top of page 1.

I've worked with plenty of (non-tech) people who are like fish out of water when trying to find information. Just learning things like Boolean operators, searching for exact phrases in quotation marks, or specifying which sites and date ranges to search is way beyond what most people know, or do.
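
For instance, a single (made-up) query like the one below already combines several of those, and it's further than most people ever get:

    "thermal throttling" site:news.ycombinator.com after:2023-01-01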

Same goes for LLMs. There's a difference between just typing prompts and knowing what to ask for and how to structure your questions.

  • salawat 2 hours ago

    Most search engines don't even do raw Boolean searching anymore, in favor of NN-powered NL voodoo that takes an entire planet's worth of internet and reduces it down to maybe 200 of the wrong sites. All because we can't have a representative index exposed to the public anymore.

TrackerFF 3 hours ago

So a nephew of mine started studying CS this fall, and I've helped him out with his intro to CS class.

They recently got a larger mid-term assignment which involves implementing some well-known, basic data structures, and the standard functionality associated with them.

In one of the problems, they were given skeleton code for some more complex functionality, and their task is to explicitly use an LLM of their choice to fill in the code, then check that it works using a set of provided tests.

I think the class in general has been updated to assume that students are using LLMs more and more, as the problem sets this year are longer and more complex, compared to those of past years (which were made available to all current students).

tkgally 4 hours ago

Ever since ChatGPT came out, I’ve been discussing it and other AI tools with the students in the university classes I teach. My impression matches the results of this survey. Some of the students have started following AI developments closely and using the tools, but many of them don’t seem interested and they wonder why I talk about it so much. Even when I told the students that they could use AI when doing some of their assignments, it was clear from their distinctive writing styles and grammatical mistakes that most of them had used it only sparingly or not at all.

Onavo 4 hours ago

For the STEM students, what they need is statistics (with a focus on high-level Bayesian and statistical learning theory, not just frequentist regression tricks) and differential equations. Then they can build AI. AI, or specifically deep-neural-network-enabled machine learning, isn't some sort of magical black-box solution with no weaknesses. (I will however admit that it is the best universal function approximator that we currently know of.) Otherwise you end up with an uneducated public whose main education is from Hollywood and ChatGPT.

You need to start in high school; the AP classes need to be revamped. Currently they are focused purely on frequentist statistics. Frequentist statistics is great for most empirical sciences like biology: the formulas are mostly plug-and-play, and even pure life-science people with no mathematical talent can wield them without trouble. The problem is that they are very far from statistical learning.

Here's the current AP Stats curriculum; it is meant to be equivalent to Stat 1:

https://library.fiveable.me/ap-stats

If you want to develop a strong foundation for ML, Units 6, 7, and 8 ought to be thrown out entirely. The level they are taught at doesn't really teach anything more than plugging in formulas. Unit 4.5 (Conditional Probability) and Unit 5 (Sampling) need to be further developed to cover the Bayesian theory, perhaps with a segue into graphical models and Markov chains. Generative ML, for example, interprets the likelihood as an information generator (since in Bayes' formula it is roughly the "inverse" of the conditional probability); unfortunately, most stats classes outside of physics and high-level ML theory will never mention this. Heck, most classically trained statisticians won't ever encounter this idea. But it is the bread and butter of generative AI. Having a vague idea of KL-divergence and of what Metropolis-Hastings is coming out of high school is infinitely more useful for a career in ML than knowing how to fiddle with a p-value. You can teach most of these concepts without calculus if you simplify some things and replace integrals with their discrete summation versions. Rejection sampling, for example, is very easy to teach. The Common Core needs a revamp, and perhaps it's time to shift away from the historical focus on calculus/pre-calc as the centerpiece of pre-college mathematical teaching.
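
As a rough illustration of how little machinery is involved, here's a minimal sketch of rejection sampling in Python (the target density is just an arbitrary example; this is a teaching toy, not a curriculum proposal):

    import random

    # Target: an unnormalized density on [0, 1], here f(x) = x^2.
    def f(x):
        return x * x

    # Envelope constant: proposals are Uniform(0, 1) and f(x) <= M on [0, 1].
    M = 1.0

    def rejection_sample():
        while True:
            x = random.random()  # propose x ~ Uniform(0, 1)
            u = random.random()  # accept with probability f(x) / M
            if u * M <= f(x):
                return x

    samples = [rejection_sample() for _ in range(10_000)]
    # The normalized target is 3x^2 on [0, 1], whose mean is 3/4.
    print(sum(samples) / len(samples))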

  • dxbydt an hour ago

    Throwing out Units 6, 7, and 8 is a horrible, horrible idea. That's the only useful part of the curriculum. In fact, most of the rest of the sections are not even stats; they are just plain math. The rest of your suggestions are just plain wild. If there is some unicorn high schooler out there who meets your wild template - knows no calculus but knows rejection sampling and MH - I would like to see which foolish employer hires this clown.

  • kubb 4 hours ago

    But that’s how employers understand AI. It’s ChatGPT.

  • throwaway314155 4 hours ago

    The article is discussing more of a tool-level competency that is lacking. This looks more like adding classes on how to incorporate generative AI into learning, and much less like the very advanced curriculum you're prescribing. Think "how to effectively use a search engine" (or even just typing) classes, but for ChatGPT.

    Believe me, most students of the Common Core don't know or care about the difference between Bayesian and frequentist statistics. Are you only interested in helping the straight-A students who have better chances of getting into Harvard? Did you read the article?

rsynnott 3 hours ago

Oh, ffs. What are “ai skills”?

I’m reminded that the big thing when I was in college was XML databases. XML databases, we were assured (though not particularly convincingly) were the future. I didn’t opt for the course covering XML databases, and somehow survive 20 years later; meanwhile no-one really remembers what an XML database even was.

(It was, all told, a rather boring time for tech fads. The 90s AI bubble had imploded with such finality that people barely dared utter the term 'AI', the dot-com crash had just happened, and the gloss was off CORBA, so weird XML-y stuff got pressed into service for a few years until the next thing came along.)

bamboozled 4 hours ago

Who cares what people want \s

  • joegibbs 3 hours ago

    It's not really a representative sample of what people want: only 2% responded to the section about AI, and only half of those are anti-AI. You'd have to assume as well that the people most likely to respond to the AI section would be the people who hate AI the most.

  • Nevermark 4 hours ago

    I expect most people don’t want their preferences to be coddled in a way that leaves them at a disadvantage when they graduate.

    AI is improving fast enough that maintaining fluency with its benefits seems like a credible “very good idea” for a lot of people.

    • amonith 4 hours ago

      Tbh, as an AI power user (dev) - what is there to learn? Maybe one day you figure out that you have to write prompts in a certain way, but you don't have to learn that "before the job" in any shape or form. It's way more important to know what to ask about and whether the AI is hallucinating (so, the actual stuff those students learn currently) than any specific tool. It's as pointless as wasting time teaching high schoolers how to fill out tax forms.

    • bamboozled 4 hours ago

      I mean, we're getting "ai" whether we like it or not so yeah

cheema33 4 hours ago

AI is just another tool. A pretty good one. And there is a skill to using it efficiently. If you ask it dumb questions, you will get dumb answers. Garbage in, garbage out.

I am a software developer and I hire devs as well. If somebody is ignorant of AI or refuses to use it, that is a hard pass for me. It is not all that different from an accountant not wanting to use computers. Sure, you could still do some work. But you will not be competitive.

  • jillesvangurp 4 hours ago

    Exactly. I expect people to know, be proficient with, and use all relevant tools available to them. Including AI; but not just AI. There are a lot of people trying to argue why they don't want/need to use certain tools. A job interview would be the wrong place to have that argument. A good, open interview question these days would be asking candidates how they are using AI in their work and what challenges they are facing with it.

    Say you are a Python developer working on some FastAPI REST service. Do you 1) ask ChatGPT to generate documentation for your API, 2) do this by hand, or 3) routinely skip that sort of thing? 1) would be the correct answer, 2) would be you being inefficient and slow, and 3) would be lazy. It takes a minute to generate perfectly usable documentation. Tweak it a little if you need to and the job is done.
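
    To make that concrete, here's a minimal sketch of the kind of thing being generated (the endpoint and model are hypothetical; the point is that the summary and docstring text, which an LLM can draft and you review, end up in the OpenAPI schema FastAPI serves at /docs):

        from fastapi import FastAPI
        from pydantic import BaseModel

        app = FastAPI(title="Example service")

        class Item(BaseModel):
            name: str
            price: float

        @app.post("/items", summary="Create an item", response_model=Item)
        def create_item(item: Item) -> Item:
            """Create a new item.

            This docstring, plus the summary above, becomes the endpoint's
            description in the generated OpenAPI documentation.
            """
            return item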

    Bonus points for generating most/all of the endpoints. I did that a few weeks ago. And the tests that prove it works. And the documentation. And the bloody README as well. Slightly tedious and you need to be good at prompting to get what you need. But I managed.

    Artisanal software is not going to be a thing for very long. I expect people to get standard stuff like that out of the way efficiently; not to waste days doing things manually.

    I would also encourage candidates to use ChatGPT during coding tests, if I actually believed in those, which I don't. I'd be more interested in their ability to understand the challenge than in their ability to produce a working solution from memory. Use tools for that.

    • cudgy 30 minutes ago

      > It takes 1 minute to generate perfectly usable documentation.

      Ok. So why generate it at all? Just have the user generate it when needed or automate as part of the build. Seems like option 3, the lazy option, might be the right one.

    • omg7584 3 hours ago

      I agree. But there's one thing I cannot quite put my finger on...

      I mean, we know how - and why - to write documentation. We all have the basic skills and just use LLMs to automate that for us. These skills, however, were won through endless manual labour, not by reading about it. We practiced until we could do it with our eyes closed.

      Where will the next generation come from? I appreciate that most companies don't have to worry about this, but if I were the head of a very large multi-generational enterprise, I would worry about the future of knowledge workers. AGI better pan out or we are all fucked.

    • mschuster91 3 hours ago

      > Bonus points for generating most/all of the endpoints.

      Swagger has solved that one for years now. In any case I think it's foolish to have multiple competing "sources of truth"... in the worst case you have stuff written on your website/Confluence/wiki, some manually written docs on Dockerhub, a README in the repository, an examples folder in the repository, Javadoc-based comments on classes and functions, Swagger docs (semi-)autogenerated from these, inline documentation in the code, years worth of shit on StackOverflow and finally whatever garbage ChatGPT made out of ingesting all of that.

      And (especially in the Javascript world) there's so much movement and breakage that all these "sources of truth" are outdated - especially StackOverflow and ChatGPT - which leaves you as the potential user of a library/API often enough with no other choice than to delve deep into the code, made even messier by the intricacies of build systems, bundlers and include/module systems (looking at you Maven/Gradle/Java Modules and webpack/rollup/npm/yarn/AMD/CommonJS/ESM). The worst nightmare is "examples" that clearly haven't been run in years, otherwise the example would have reflected a breaking API change whose commit is five years ago. That's a sure way of sending your users into rage fits.

      IMHO, there is only one way: your "examples" should be proper unit/integration/e2e tests (that way you show your users how you intend for them to use your interface, and you'll notice when your examples break because they are literally your tests!), and your documentation (no matter the format, be it HTML, Markdown or Swagger) should all be auto-generated from in-code Javadoc/whatever. Inline comments should be reserved for things only someone working on the actual library needs to know, describing why you implemented a Thing a certain way (say, to work around some deficiency) - think of the infamous "total hours wasted here" joke [1].
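
      As a minimal sketch of that idea in Python (the function here is a made-up placeholder; the point is that the usage example in the docstring is executed as a test by the standard-library doctest module, so it can never silently rot):

          import doctest

          def slugify(title: str) -> str:
              """Turn a title into a URL slug.

              The example below is simultaneously the documentation and a test:

              >>> slugify("Hello, World!")
              'hello-world'
              """
              cleaned = "".join(c if c.isalnum() or c == " " else "" for c in title)
              return "-".join(cleaned.lower().split())

          if __name__ == "__main__":
              # Fails loudly if the documented example ever stops matching reality.
              doctest.testmod(verbose=True)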

      My go-to example, even if it is not perfect (the website is manually maintained, but the maintainers are doing a fucking good job at that!), is the Symfony PHP framework. With the exception of the security stack (that one is an utter, utter nightmare to keep up with), their documentation is excellent, legible, and easy to understand, and the examples they provide on usage are clear and to the point. Even if you haven't worked with it for a year or two, it's easy to get back up to speed.

      [1] https://nickyreinert.medium.com/ne-13-total-hours-wasted-her...

      • jillesvangurp 3 hours ago

        > Swagger has solved that one for years now.

        I was referring to both the implementation and generating good/complete documentation strings for it, which is tedious and repetitive work. Obviously I'm using OpenAPI (aka Swagger); support for that is built into FastAPI. That's one of the reasons I picked it.

        The tests are also generated, but I screened them manually and iterated on them with ChatGPT ("also test this", "what about that edge case", etc.). I know how to write good tests. But generating them is a better use of my time than writing them manually, especially spelling out all the permutations of what can go wrong and should be tested, and asserting for each of them. AIs are better at that sort of thing.

        These are simple CRUD endpoints. I've written loads of them over the years. If you are still doing that manually, you are wasting time.

  • kachapopopow 4 hours ago

    Some people are just really good without AI so I wouldn't immediately hard pass on them.

    • kolinko 4 hours ago

      Perhaps, if they are super specialised in one narrow niche. The moment they need to write code in a new language/framework/algorithm/domain, they benefit tremendously from ChatGPT.

      Or even just writing their documentation entries.

      I'd say the main devs that don't benefit are the mid-level ones: good enough to do some programming, but not good enough to figure out how to benefit from GPT.

      • omg7584 3 hours ago

        It's interesting most of you equate being "good" with being able to close a lot of tickets.

        I'd take a colleague with knowledge of the fundamentals who knows how to ask the right questions - and when not to - over one who "produces a metric shit ton of code, but when asked doesn't actually know anything".

        The first one might be slow and might be editing his, horror of horrors, documentation entries by hand, but that's because he is thinking and refining while he is doing it. He is not an automaton spitting out "work units".

        After some contemplation he will wipe the whole project off the table, because in the grand scheme of things it doesn't pan out, while colleague number two was busy generating a metric shit-ton of work units and being happy about his productivity.

        No LLM will tell you that - or, more specifically, no LLM will tell you that if you don't know what to ask for in the first place.

    • jillesvangurp 4 hours ago

      They are still going to be slow doing things manually.

  • u20241003 4 hours ago

    Damn, I would be sending AI-generated pull-requests your way all day! Closing tickets left and right.

    • jillesvangurp 4 hours ago

      As long as those are good pull requests, more power to you. Why aren't you?

      • EraYaN 3 hours ago

        Mostly because most projects ban you if you start auto-generating PRs, patches or even issues. Most LLMs are just not good enough yet. We have seen this with curl and a bunch of other projects where LLMs were used for security issue reporting, and, well, the maintainers were not too happy talking to a bot that didn't know what the hell it was doing.

        It's essentially a DoS attack on the maintainers.

atleastoptimal 4 hours ago

Students will only need AI skills for the next 2-3 years, after which point AGI will render the "need" to have any skills as meaningless as expecting students to have woodworking or metalsmithing skills.

  • DiscourseFan 4 hours ago

    Yes, but more precisely the skill of needing to use anything going by the name of AI...

  • phito 4 hours ago

    I'm so, soooo sick of people claiming so certainly that AGI will be here in X time. You don't know. Nobody knows. Anyone claiming they know is full of shit. Stop speaking in absolutes.

    • charlieyu1 4 hours ago

      I still remember, a few decades ago, people saying fusion energy would be available in a few decades.

      • lordnacho 4 hours ago

        On the other hand, we actually do have a computer, with access to all knowledge, in our pockets, right now.

        EDIT.

        The point of the comment is not to say that you will in fact have all knowledge available. There are guidelines about how you're supposed to read here on HN.

        The point was that this device that people could only dream of a few decades ago actually became available.

        Maybe it's worth reminding people that you used to not be able to sit on your butt at home and still access just about any undergraduate level material you could think of, along with a street map of everywhere, as well as a way to take pictures, deal with your finances, and a zillion other applications.

        • shakna 3 hours ago

          All knowledge? Hardly. A ton of the things I research just aren't available except in hard copy - and even then, there may be only three or four copies in the entire country.

        • omg7584 3 hours ago

          Yes, and as it turns out, access to information wasn't the problem...