• Lovable Sidekick
    link
    fedilink
    English
    20
    edit-2
    2 months ago

    What’s breathtaking is how cluelessly education system administrators are failing at their jobs. They’ve been screwing up the system for a very long time, and now they have a whole new set of shiny objects to spend your money on.

  • @[email protected]

    Oh no, maybe teachers will have to put effort into their students beyond assigning homework that an AI can do.

    • Don Piano

      Do you believe that the point of such assignments is because the teacher desires to read a couple dozen nigh-identical essays on the topic at hand?

      • @[email protected]

        This is my point exactly. They don’t desire that. Nor should they. And so they shouldn’t do that.

          • @[email protected]

            Yes, they might have been good teachers if they hadn’t decided to support a mediocre institution that prevents teaching.

            • @[email protected]

              So they should go into private education, so only the rich or lucky can get a good education?

              Or do you think that teachers are the ones directing public education policy?

              Or are you saying that somehow not participating in a flawed system will somehow fix it?

              The entire purpose behind policies that hinder quality education is to drive skilled educators away. To choose not to participate is the best way to expedite the goals of those who benefit from poor quality education.

              • @[email protected]

                We cannot keep with this horrible system that does nothing but torture kids and teachers alike. It’s not working. It’s clear it’s not working. Change will never come from the top, because like you said, they will never voluntarily change it. So it must come from the bottom.

                • @[email protected]

                  So in the meantime we should just abandon students to the people fucking up the system?

                  One can’t just snap their fingers and make everything better; reality does not work that way.

  • @[email protected]

    I’m thinking the only way people will be able to do schoolwork without cheating now is going to be to make them sit in a monitored room and finish it there.

    • @[email protected]

      How is this kind of testing relevant anymore? Isn’t it creating an unrealistic situation, given the brave new world of AI everywhere?

        • CherryLips

          Education and learning are two different things. School tests ask you to repeat back what has been taught to you. Meaningful learning tends to be internally motivated, and AI is unlikely to fulfill that aspect.

      • @[email protected]

        Because it tests what you actually retained, not what you can convince an AI to tell you.

        • @[email protected]

          But what good is that if AI can do it anyway?

          That is the crux of the issue.

          Years ago the same thing was said about calculators, then graphing calculators. I had to drop a stat class and take it again later because the dinosaur didn’t want me to use a graphing calculator. I have ADD (undiagnosed at the time) and the calculator was a big win for me.

          Naturally they were all full of shit.

          But this? This is different. AI is currently as good as a graphing calculator for some engineering tasks, horrible for some others, excellent at still others. It will get better over time. And what happens when it’s awesome at everything?

          What is the use of being the smartest human when you’re easily outclassed by a machine?

          If we get fully automated yadda yadda, do many of us turn into mush-brained idiots who sit around posting all day? Everyone retires and builds Adirondack chairs and sips mint juleps and whatever? (That would be pretty sweet. But how to get there without mass starvation and unrest?)

          Alternately, do we have to do a Butlerian Jihad to get rid of it, and threaten execution to anyone who tries to bring it back… only to ensure we have capitalism and poverty forever?

          These are the questions. You have to zoom out to see them.

          • Natanael

            Because if you don’t know how to tell when the AI succeeded, you can’t use it.

            To know when it succeeded, you must know the topic.

            The calculator is predictable and verifiable. An LLM is not.

            • @[email protected]

              I’m not sure what you’re implying. I’ve used it to solve problems that would’ve taken days to figure out on my own, and my solutions might not have been as good.

              I can tell whether it succeeded because its solutions either work, or they don’t. The problems I’m using it on have that property.
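That “its solutions either work, or they don’t” property can be made concrete: treat AI-generated code as untrusted and check it against a trivially correct reference on many random inputs. A minimal sketch in Python (the candidate function here is hypothetical, standing in for whatever the model actually produced):

```python
import random

def candidate_sort(xs):
    # Stand-in for an LLM-generated solution; we treat it as untrusted.
    return sorted(xs)

def reference_sort(xs):
    # Trivially correct (if slow) oracle: repeatedly extract the minimum.
    xs = list(xs)
    out = []
    while xs:
        m = min(xs)
        xs.remove(m)
        out.append(m)
    return out

def looks_correct(trials=1000):
    # Random differential testing: any mismatch falsifies the candidate.
    for _ in range(trials):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        if candidate_sort(xs) != reference_sort(xs):
            return False
    return True
```

Passing doesn’t prove correctness in general, but for problems with checkable outputs it is exactly the “works or it doesn’t” verification described above.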

              • Natanael

                That says more about you.

                There are a lot of cases where you cannot know if it worked unless you have expertise.

                • @[email protected]

                  This still seems too simplistic. You say you can’t know whether it’s right unless you know the topic, but that’s not a binary condition. I don’t think anyone “knows” a complex topic to its absolute limits. That would mean they had learned everything about it that could be learned, and there would be no possibility of there being anything else in the universe for them to learn about it.

                  An LLM can help fill in gaps, and you can use what you already know as well as credible resources (e.g., textbooks) to vet its answer, just as you would use the same knowledge to vet your own theories. You can verify its work the same way you’d verify your own. The value is that it may add information or some part of a solution that you wouldn’t have. The risk is that it misunderstands something, but that risk exists for your own theories as well.

                  This approach requires skepticism. The risk would be that the person using it isn’t sufficiently skeptical, which is the same problem as relying too much on their own opinions or those of another person.

                  For example, someone studying statistics for the first time would want to vet any non-trivial answer against the textbook or the professor rather than assuming the answer is correct. Whether the answer comes from themselves, the student in the next row, or an LLM doesn’t matter.

              • @[email protected]

                The problem is offloading critical thinking to a black box of questionably motivated design. Did you use it to solve problems, or did you use it to find a sufficient approximation of a solution? If you can’t deduce why the given solution works, then it is literally unknowable whether your problem is solved; you’re just putting faith in an algorithm.

                There are also political reasons we’ll never get luxury gay space communism from it. General AI is the wet dream of every authoritarian: an unverifiable, omnipresent, first-line source of truth that will shift the narrative to whatever you need.

                The brain is a muscle, and critical thinking is trained through practice; not thinking will never be a shortcut for thinking.

            • @[email protected]

              It’s already capable of doing a lot, and there is reason to expect it will get better over time. If we stick our fingers in our ears and pretend that’s not possible, we will not be prepared.

              • @[email protected]

                If you read up on it, it’s capable of very little beneath the surface of what it appears to be.

                Show me one that is well studied, like clinical trial levels, then we’ll talk.

                We’re decades away at this point.

                My overall point is that it’s just as meaningless to talk about now as it was in the 90s, because we can’t conceive of what a functioning product will be, never mind its context in a greater society. When we have it, we can discuss it then, as we’ll have something tangible to discuss. But where we’ll be in decades is hard to regulate now.

              • @[email protected]

                Specialized AI like that is not what most people know as AI. When most people say AI, they’re referring to LLMs.

                Specialized AI, like that showcased, is still decades away from generalized creative thinking. You can’t ask it to do a science experiment within a class because it just can’t. It’s only built for math proofs.

                Again, my argument isn’t that it will never exist.

                Just that it’s so far off it’d be like trying to regulate smartphone laws in the 90s. We would have only had pipe dreams as to what the tech could be, never mind its broader social context.

                So talk to me when it can, in the case of this thread, teach in clinically validated ways. We’re still decades from that.

            • @[email protected]

              The faulty logic was supported by a previous study from 2019

              This directly applies to the human journalist; studies on other models from 6 years ago are pretty much irrelevant, and this one apparently tested very small distilled ones that you can run on consumer hardware at home (Llama3 8B, lol).

              Anyway, this study seems trash if their conclusion is that small and fine-tuned models (user compliance includes not suspecting intentionally wrong prompts) failing to account for human misdirection somehow means “no evidence of formal reasoning”. That means formal logic and formal operations, not reasoning in general; we use informal reasoning for the vast majority of what we do daily, and we also rely on “sophisticated pattern matching”, lmao, it’s called cognitive heuristics. Kahneman won the Nobel Prize for recognizing type 1 and type 2 thinking in humans.

              Why don’t you go repeat the experiment yourself on huggingface (accounts are free, over ten models to test, actually many are the same ones the study used) and see what actually happens? Try it on model chains that have a reasoning model like R1 and Qwant and just see for yourself and report back. It would be intellectually honest to verify things since we’re talking about critical thinking in here.

              Oh, and add a control group here, a comparison with average human performance, to see the really funny but hidden part. Pro-tip: CS STEMlords catastrophically suck when larping as cognitive scientists.

              • @[email protected]

                So you say I should be intellectually honest by doing the experiment myself, then say that my experiment is going to be shit anyways? Sure… That’s also intellectually honest.

                Here’s the thing.

                My education is in physics, not CS. I know enough to know what I try isn’t going to be really valid.

                But unless you have peer-reviewed research to show otherwise, I would take your home-grown experiment to be about as valid as mine.

                • @[email protected]

                  And here’s experimental verification that humans lack formal reasoning when sentences don’t precisely spell it out for them: all the models they tested except chatGPT4 and o1 variants are from 27B and below, all the way to Phi-3 which is an SLM, a small language model with only 3.8B parameters. ChatGPT4 has 1.8T parameters.

                  1.8 trillion > 3.8 billion

                  ChatGPT4’s performance difference (accuracy drop) on regular benchmarks was a whopping -0.3, versus Mistral 7B’s -9.2 drop.

                  Yes there were massive differences. No, they didn’t show significance because they barely did any real stats. The models I suggested you try for yourself are not included in the test and the ones they did use are known to have significant limitations. Intellectual honesty would require reading the actual “study” though instead of doubling down.

                  Maybe consider these possibilities:
                  a. STEMlords in general may know how to do benchmarks, but not cognitive-style testing or the statistical methods from that field.
                  b. This study is an example of the “I’m just messing around trying to confuse LLMs with sneaky prompts instead of doing real research because I need a publication without work” type of study, equivalent to students making chatGPT do their homework.
                  c. For 3.8B models, the size in bytes is between 1.8 and 2.2 gigabytes.
                  d. Not that “peer review” is required for criticism, lol, but that’s a preprint on arXiv; the “study” itself hasn’t been peer reviewed or properly published anywhere (how many months are there between October 2024 and May 2025?).
                  e. Showing some qualitative difference between quantitatively different things without showing p and using weights is garbage statistics.
                  f. You can try the experiment yourself, because the models I suggested have visible Chain of Thought, and you’ll see if and over what they get confused.
                  g. When there are graded performance differences, with several models reliably not getting confused at least more than half the time, but you say “fundamentally can’t reason”, you may be fundamentally misunderstanding what the word means.

                  Need more clarifications instead of reading the study or performing basic fun experiments? At least be intellectually curious or something.
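The byte figure in point c can be sanity-checked with back-of-the-envelope arithmetic: raw weight storage is roughly parameter count times bits per weight divided by 8 (real model files add some overhead, which is why observed sizes run a bit above the raw number). A quick sketch:

```python
def approx_size_gb(params, bits_per_weight):
    # Raw weight storage only: params * bits / 8 bytes, in decimal GB.
    return params * bits_per_weight / 8 / 1e9

PARAMS_3_8B = 3.8e9  # parameter count of a 3.8B model like Phi-3-mini

# 16-bit weights: ~7.6 GB; 8-bit: ~3.8 GB; 4-bit quantized: ~1.9 GB,
# consistent with the 1.8-2.2 GB files seen for 3.8B models.
sizes = {bits: approx_size_gb(PARAMS_3_8B, bits) for bits in (16, 8, 4)}
```

The 1.8–2.2 GB range quoted above thus corresponds to 4-bit quantization plus file overhead, not to the full-precision weights.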

          • HobbitFoot

            If you want to compare a calculator to an LLM, you could at least reasonably expect the calculator result to be accurate.

            • @[email protected]

              Why? Because you put trust in the producers of said calculators not to fuck it up? Because you trust others to vet those machines? Or are you personally validating them? Unless you’re disassembling those calculators and inspecting their chipsets, you’re just putting your trust in someone else and claiming “this magic box is more trustworthy”.

              • HobbitFoot

                A combination of personal vetting via analyzing output and the vetting of others. For instance, the Pentium calculation error was in the news. Otherwise, calculation by computer processor is well enough understood that the technology is accepted for use in cases involving human lives.

                In contrast, there are several documented cases in the news where LLMs have been incorrect, to the point where I don’t need personal vetting. No one is anywhere close to stating that LLMs can be used in cases involving human lives.
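The Pentium vetting mentioned above really was that accessible: the FDIV bug had a famous test case, 4195835 / 3145727, which flawed chips computed as about 1.333739 instead of the correct ~1.333820, so anyone could check their own hardware. A sketch of that check:

```python
def fdiv_check():
    # Classic Pentium FDIV test case: defective chips returned ~1.333739
    # instead of the correct ~1.333820 for this division.
    result = 4195835 / 3145727
    return abs(result - 1.333820449) < 1e-6

# A non-defective divider passes; that predictability and repeatability is
# what makes a calculator or CPU verifiable in a way an LLM is not.
```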

  • @[email protected]

    Prelude to the society Vonnegut wrote about in ‘Player Piano’ and Bradbury in ‘Fahrenheit 451’.

  • @[email protected]

    Produce an army of people that rely on corporate products to stay alive. What could go wrong?

  • @[email protected]

    If we decide to ban smartphones from schools, we should ban them from work too. I’m supposed to be writing an article right now and instead I’m here. Then we should ban them from streets, so that people have to pay attention to where they are going and the things going on around them. At that point we’d have something like functioning human beings again instead of mindless zombies. We could still have terminals for plugging into the Machine, but our time with it should be regulated (like it already is with research clusters) so that we don’t waste energy. There, the whole problem is solved, and all it takes is a global Butlerian Jihad.

  • @[email protected]

    The teacher uses PowerPoint and multiple choice tests to depict fake effort at teaching, the students use AI to depict fake effort at learning. I see nothing wrong here.

  • HobbitFoot

    How are other countries handling it? I can’t imagine AI being an American only education issue.

    • @[email protected]

      American education isn’t actually about education, but about creating compliant cogs for the machinery of the corporate oligarchy. When the goal is the betterment of individuals and society, the methods with which you teach and assess progress will be dramatically different. This is more of an “American problem” than one for the rest of the world precisely because of how the American system is designed and implemented. It does not value, measure, reinforce, or reward individual betterment… but rote memorization and how compliant you are under the arbitrary authoritarian structure of the system.

      • @[email protected]

        American education isn’t actually about education, but about creating compliant cogs for the machinery of the corporate oligarchy.

        Well, historically that’s true.

        But the modern American education system is about Stack Ranking to create the illusion of meritocracy. So the functional purpose of the system is to score better than the rest of your classmates. Since the actual lesson plan doesn’t matter and only the honors you get from completing the course are perceived to have value, you either want to cheat the hell out of every course to beat the herd, or you want to find a degree plan where you can appear to be the Best Kid In Class, either through grade inflation or by participating in a class full of dropouts/fake students.

        It does not value, measure, reinforce, or reward individual betterment… but rote memorization and how compliant you are under the arbitrary authoritarian structure of the system.

        Rote memorization is easy to evaluate, because the answers are discrete and can be fed into a binary grading engine.

        It’s also easy to cheat, because you don’t need to know how to solve the problems, just how to source the correct pattern of answers.

      • Buelldozer

        Despite your wall of text this isn’t just a problem in the United States.

    • @[email protected]

      It’s the same in France and, I guess, everywhere else. Students can cheat for free and no longer need to do anything, why would they study anymore?

      I’ve also seen a few young engineers using ChatGPT to do their job because it’s easier than working. When I told them their code was bad (with mentoring and help, I’m not an asshole), they used another prompt that changed their whole code but it was still full of bugs.

      We’re doomed.

      • @[email protected]

        Students can cheat for free and no longer need to do anything, why would they study anymore?

        In theory, they need to study in order to learn the skills necessary to be gainfully employed. But in practice, the promise of the future is “automate everything”, so might as well learn how to maximize the outputs of the Big Grifting Machine while you’re still young.

        Why waste time mastering comprehensive writing when there won’t be any employers left to read what you wrote? Why waste time developing technical skills when everything gets outsourced to the lowest bidding firm in the South Pacific? Why waste time developing a talent for artistry, music, or cinema when we’ve decided the future of performative arts is whatever bot-farm best self-promotes AI slop to the top of the most trending Spotify playlist?

        When I told them their code was bad (with mentoring and help, I’m not an asshole), they used another prompt that changed their whole code but it was still full of bugs.

        Why do they care if the code is full of bugs? They’ll be changing jobs in another two years anyway, because that’s the only way to get a raise. They aren’t invested in the success of their current firm, much less the profitability of the clients they work for (who are, themselves, likely going to be outsourcing this shit to India in another few years). And all this work is just about maximizing the bottom line for private equity anyway, so why does anyone care if the project succeeds? It’s not like my quality of life hinges on my ability to do useful productive work.

        And if quality of life declines? Just find someone to blame. Migrants. The Wrong Politicians. China. Lizard People. Fuck, I’ll just ask ChatGPT why my life sucks and believe everything it tells me, because… why not? It’s not like everyone else isn’t lying.

      • @[email protected]

        I code for fun, have been doing so for decades, and using AI as a helper has been amazing.

        My coding skill in my cursed BASIC variants (VB6/VBA/VB.NET) translated overnight to basically any language that I want; it’s just amazing.

        I can almost code in JavaScript by hand just from exposure, despite never formally trying to learn it.

      • HobbitFoot

        Yeah, I figure this isn’t going to be an American only problem.

      • @[email protected]

        Apropos of nothing, I read a post claiming that the phonetic pronunciation of “ChatGPT” in French sounds like “Cat, I farted.” So I used Google Translate’s audio and sure enough, “ChatGPT” and “Chat, j’ai pété” sound nearly identical when piped through the app’s audio feature.

  • @[email protected]

    Honest question: how do we measure critical thinking and creativity in students?

    If we’re going to claim that education is being destroyed (and show we’re better than our great^n grandparents complaining about the printing press), I think we should try to have actual data instead of these think-pieces and anecdata from teachers. Every other technology that the kids were using had think-pieces and anecdata.

    • @[email protected]

      Honest question: how do we measure critical thinking and creativity in students?

      The only serious method of evaluating critical thinking and creativity is through peer evaluation. But that’s a subjective scale thick with implicit bias, not a clean and logical discrete answer. It’s also not something you can really see in the moment, because true creativity and critical thinking will inevitably produce heterodox views and beliefs.

      Only by individuals challenging and outperforming the status quo do you see the fruits of a critical and creative labor force. In the moment, these folks just look like outliers who haven’t absorbed the received orthodoxy. And a lot of them are. You’ll get your share of Elizabeth Holmes-es and Sam Altmans alongside your Vincent van Goghs and Nikola Teslas.

      I think we should try to have actual data instead of these think-pieces and anecdata from teachers.

      I agree that we’re flush with think-pieces. Incidentally, the NYT Op-Ed section has doubled in size over the last few years.

      But that’s sort of the rub. You can’t get a well-defined answer to the question “Is Our Children Creative-ing?” because we only properly know it by the fruits of the system. Comically easy to walk into a school with a creative writing course and scream about how this or that student is doing creativity wrong. Comically easy to claim a school is Marxist or Fascist or too Pro/Anti-Religion or too banal and mainstream by singling out a few anecdotes in order to curtail the whole system.

      The fundamental argument is that this kind of liberal arts education is wasteful. The output isn’t steady and measurable. The quality of the work isn’t easily defined as above or below the median. It doesn’t yield real, consistent, tangible economic value. So we need to abolish it in order to become more efficient.

      And that’s what we’re creating. A society that is laser-focused on making economic numbers go up, without stopping to ask whether a larger GDP actually benefits anyone living in the country where all this fiscal labor is performed.

      • @[email protected]

        I think it’s fine for this to be poorly defined; what I want is something aligned with reality beyond op-eds. Qualitative evidence isn’t bad, but I think it needs to be aggregated instead of anecdoted. Humans are real bad at judging how the kids are doing (complaints like the OP’s are older than liberal education, no?); I don’t want to continue the pattern. A bunch of old people worrying too much about students not reading Shakespeare in classes is how we got the cancel culture moral panic - I’d rather learn from that mistake.

        A handful of thoughts: There are longitudinal studies that interview kids at intervals; are any of these getting real weird swings? Some kids have AI earlier; are they much different from similar peers without? Where are the broad interviews/story collections from the kids? Are they worried? How would they describe their own and their peers’ use of AI?

        • @[email protected]

          A bunch of old people worrying too much about students not reading Shakespeare in classes is how we got the cancel culture moral panic - I’d rather learn from that mistake.

          The “old people complaining about Shakespeare” was the thin end of the wedge intended to defund and dismantle public education. But the leverage comes from large groups of people who are sold the notion that children are just born dumb or smart and education has no material benefit.

          A lot of this isn’t about teaching styles. It’s about public funding of education and the neo-confederate dream of a return to ethnic segregation.

          There are longitudinal studies that interview kids at intervals; are any of these getting real weird swings?

          A lot of these studies come out of public sector federal and state education departments that have been targeted by anti-public education lobbying groups. So what used to be a wealth of public research into the benefits of education has dried up significantly over the last generation.

          What we get instead is a profit-motivated push for standardized testing, lionized by firms that directly benefit from public sector purchasing of test prep and testing services. And these tend to come via private think-tanks with ties back to firms invested in bulk privatization of education. So good luck in your research, but be careful when you see something from CATO or The Gates Foundation, particularly in light of the fact that more reliable and objective data has been deliberately purged from public records.

    • @[email protected]

      As far as I can tell, the strongest data is wrt literacy and numeracy, and both of those are continuing downward trends that predate AI, am I wrong? We’re also still seeing kids from lockdown, which seems like a much more obvious “oh, that’s a problem” than the AI stuff.

  • @[email protected]

    The fact that people can’t even use their own common sense on Twitter without asking AI for context shows we are in a scary place. AI is not some all-knowing Magic 8 Ball, and it puts out a ton of misinformation.

  • @[email protected]

    Imagine paying tens of thousands of dollars (probably of their parents’ saved money) to go to university and have a chatbot do the whole thing for you.

    These kids are going to get spit out into a world where they will have no practical knowledge and no ability to critically think or adapt.

  • Eugene V. Debs' Ghost
    • Teachers are overworked, underpaid, and some are still using coursework that hasn’t been updated in years, despite how the field has advanced
    • Students go into college due to the social expectation, some even unsure of what to pursue as a career or even which classes to take
    • Exceeding the course requirements does nothing for your GPA; an A that got “110%” and an A that got 90% are the same
    • Students, failing or passing, still rack up debt for this social expectation
    • Teachers still fail to pay their bills for this social need

    Yeah, AI is the fault here; it’s not like the system at large has been fucked over since Reagan.

  • @[email protected]

    This has always seemed overblown to me. If students want to cheat on their coursework, who cares? As long as exams are given in a controlled environment, it’s going to be painfully obvious who actually studied the material and who had ChatGPT do it for them. Re-taking a course is not going to be fun or cheap.

    Maybe I’m oversimplifying this, but it feels like proctored testing solves the entire problem.

    • @[email protected]

      Problem is, by the time they’ve failed the test, the opportunity for them to learn the content has largely passed.

      The purpose of school is to educate and teach thinking skills. Tests are just a way to assess how effectively you and your students are achieving that goal. If something (in this case, easy access to AI tools in the classroom) is disrupting that teaching/learning process, sure, it’s useful to detect that through testing, but it doesn’t do anything really to solve the problem. Some fraction of kids are disciplined enough to recognize that skating by on classwork will lead to poor test results and possibly retaking classes, but generally those aren’t the kids you need to worry about anyway.

    • @[email protected]

      Who would you rather have as a surgeon? The one who did all their coursework by hand and graduated with Bs, or the straight-A superstar who got a full ride to Johns Hopkins by using ChatGPT and just hiding their tracks better than the rest of the class? I’m not saying those are the only two options, but there’s definitely a reason we shouldn’t be so cavalier about cheating.

    • sunzu2

      Even if everyone does poorly, the school will still have to pass some or all of them.

      • @[email protected]

        Why? If everyone does poorly, everyone should fail, provided the opportunity to learn was there.

        • @[email protected]

          In France you cannot fail a middle or high school class anymore. The official explanation is that it hurts the kids’ feelings. The teachers’ explanation is that too many people would fail.

        • sunzu2

          It’s the system, and they need to move bodies.

          For example, SAT scores started to crater, so schools just stopped asking for them to expand the pool, lol.

  • @[email protected]

    Unfortunately, I think many kids could easily approach AI the same way older generations thought of math, calculators, and the infamous “you won’t have a calculator with you everywhere.” If I were a kid today and knew I didn’t have to know everything because I could just look it up, instantly, I too would become quite lazy. Even if AI can’t do something now, they are smart enough to know AI in 10 years will. I’m not saying this is right, but I see how many kids would end up there.