• @[email protected]
    8 points · 8 days ago

    People on Lemmy bitching about how bad AI is for the environment, yet this gets upvotes.

    Doublethink 10/10

      • @[email protected]
        1 point · 7 days ago

        Wouldn’t that be the GPT user who posted this screenshot? Someone making some kind of “I’m getting back at those pesky AI companies by costing them money” move … by pressing the “push to run a machine that burns the planet” button?

        Or worse yet, it’s an accelerationist willingly pushing the button.

  • @[email protected]
    17 points · 7 days ago

    Me to AI: alright, I’m about to send you a two-part message. Do not respond to the first message.

    AI: Gotcha! I won’t respond

    • @[email protected]
      9 points · 7 days ago

      That could reasonably be interpreted as meaning you haven’t sent the first part yet.

      But I assume it still responds like that when you do.

    • Victor
      2 points · 7 days ago

      How would it know not to respond to the first part without processing it first? The request makes no sense.

      Like telling a human, hey, don’t listen to this first part! Also don’t think about elephants!

  • Canaconda
    59 points · 8 days ago

    As someone who grew up using telephones… the last part is the most human chatbot interaction I’ve ever seen.

    • lemmyng
      22 points · 8 days ago

      “You hang up first!”

      “No, you hang up first!”

  • @[email protected]
    14 points · 8 days ago

    I would think that, since it’s been recognised that these messages are costing a lot of energy (== money) to process, companies would at some point add a simple <if input == “thanks”> type filter to catch a solid portion of them. Why haven’t they?
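
    A minimal sketch of what such a pre-filter could look like, purely for illustration; the phrase list and function names are assumptions, not how any real chatbot backend is wired:

    ```python
    # Hypothetical pre-filter: short-circuit obvious "thanks"-type messages
    # with a canned reply so they never reach the model at all.
    CLOSING_PHRASES = {"thanks", "thank you", "thx", "ty", "ok thanks", "bye", "goodbye"}

    def is_closing_message(text: str) -> bool:
        # Cheap string check: normalise case and trailing punctuation, no model involved.
        return text.strip().lower().strip("!. ") in CLOSING_PHRASES

    def handle_message(text: str, call_llm) -> str:
        if is_closing_message(text):
            return "You're welcome!"   # canned reply, zero GPU time
        return call_llm(text)          # only non-trivial messages hit the model

    # handle_message("Thanks!", call_llm=lambda t: "...")  ->  "You're welcome!"
    ```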

    • @[email protected]
      1 point · 7 days ago

      It won’t be as simple as that, and the engineers who work on these systems can only think in terms of LLMs and text classification, so they’d run your message through a classifier and end the conversation if it returns a “goodbye or thanks” score above 0.8, saving exactly zero compute.

      • @[email protected]
        2 points · 7 days ago

        I mean, even if we resort to using a neural network to check “is the conversation finished?”, that hyper-specialised NN would likely be orders of magnitude cheaper to run than your standard LLM, so you could probably save quite a bit of power/money by using it as a filter in front of the actual LLM, no?
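
        A toy sketch of that gating idea, with a scikit-learn bag-of-words classifier standing in for the hyper-specialised NN; the training data and threshold below are made up for illustration:

        ```python
        # Toy "cheap gate in front of the expensive model" sketch.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        texts = ["thanks!", "thank you so much", "bye", "ok great, thanks",
                 "how do I reverse a list in python?", "write me a cover letter",
                 "explain quantum tunnelling", "what's the weather on mars?"]
        labels = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = conversation is over, 0 = real request

        gate = make_pipeline(CountVectorizer(), LogisticRegression())
        gate.fit(texts, labels)

        def handle_message(text: str, call_llm) -> str:
            # The gate runs in microseconds on a CPU; the LLM does not.
            if gate.predict_proba([text])[0][1] > 0.8:
                return "You're welcome, happy to help!"   # canned reply
            return call_llm(text)                          # expensive path
        ```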

  • SUPER SAIYAN
    1 point · 7 days ago

    I don’t understand. People here argued at length with me for using AI, citing the huge amounts of water it uses and the noise around data centres. But this gets upvoted? And how is this a meme? It’s just a long screenshot.

  • @[email protected]
    7 points · 7 days ago

    Who in their right mind would want to talk to AI to begin with? I don’t think I’ll ever understand why someone would want to do this outside of a single instance of curiosity.