Lots of people on Lemmy really dislike AI’s current implementations and use cases.
I’m trying to understand what people would want to be happening right now.
Destroy gen AI? Implement laws? Hope that all companies use it for altruistic purposes to help all of mankind?
Thanks for the discourse. Please keep it civil, but happy to be your punching bag.
Most importantly, I wish countries would start giving a damn about the extreme power consumption caused by AI and regulate the hell out of it. Why should we lower our monitors' refresh rates to save energy while a ton of it gets burned by useless AI agents we should be getting rid of instead?
Make AIs open source by law.
TBH, it’s mostly the corporate control and misinformation/hype that’s the problem. And the fact that they can require substantial energy use and are used for such trivial shit. And that that use is actively degrading people’s capacity for critical thinking.
ML in general can be super useful, and is an excellent tool for complex data analysis that can lead to really useful insights…
So yeah, uh… Eat the rich? And the marketing departments. And incorporate emissions into pricing, or regulate them to the point where AI is only viable for non-trivial use cases.
I don't dislike AI, I dislike capitalism. Blaming the technology is like blaming the symptom instead of the disease. AI just happens to be the perfect tool to accelerate it.
First of all, stop calling it AI. It is just large language models for the most part.
Second: an immediate carbon tax on the energy consumption of datacenters, in line with current damage estimates for emissions. That would be around $400/tCO2, IIRC (see the rough sketch after this comment for what that means in practice).
Third: Make it obligatory by law to provide disclaimers about what it is actually doing. So if someone asks "is my partner cheating on me?", the first message should be: "This tool does not understand what is real and what is false. It has no actual knowledge of anything, in particular not of your personal situation. This tool just puts words together that seem likely to belong together. It cannot give any personal advice and cannot be used for any knowledge gain. This tool is solely to be used for entertainment purposes. If you use the answers of this tool in any dangerous way, such as for designing machinery, operating machinery, or making financial decisions, you are liable for it yourself."
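To put that second point in perspective, here is a back-of-the-envelope sketch of what a $400/tCO2 tax could mean for one large datacenter. The power draw and grid carbon intensity below are made-up illustrative numbers, not real figures for any facility:

```python
# Rough illustration of a $400/tCO2 tax on datacenter energy use.
# power_mw and grid_intensity_kg_per_kwh are assumptions for the example.

power_mw = 100                    # assumed average draw of one large datacenter, MW
hours_per_year = 24 * 365
grid_intensity_kg_per_kwh = 0.4   # assumed grid carbon intensity, kgCO2 per kWh
tax_per_tonne = 400               # $/tCO2, the rate mentioned above

energy_kwh = power_mw * 1000 * hours_per_year                     # MW -> kWh over a year
emissions_tonnes = energy_kwh * grid_intensity_kg_per_kwh / 1000  # kg -> tonnes
annual_tax = emissions_tonnes * tax_per_tonne

print(f"{emissions_tonnes:,.0f} tCO2/year -> ${annual_tax:,.0f}/year in tax")
# Under these assumptions: ~350,400 tCO2/year -> roughly $140 million/year
```

Even with generous error bars, that is the scale of number "in line with current damage expectations" would mean.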
"First of all, stop calling it AI. It is just large language models for the most part."
Leave it to the anti-AI people to show their misunderstandings fast and early. LLMs are AI; they're just not general AI.
The pathfinding systems in most games with enemies are also AI. It's a generic term.
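For what it's worth, that kind of "enemy AI" is often nothing more than a plain graph search. A toy sketch (not any particular engine's code) of the sort of narrow, unglamorous algorithm that has been called AI in games for decades:

```python
from collections import deque

# Toy "enemy AI": breadth-first search over a tile grid.
# grid[y][x] == 1 is a wall; start and goal are (x, y) tuples.
def find_path(grid, start, goal):
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        x, y = path[-1]
        if (x, y) == goal:
            return path
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == 0 and (nx, ny) not in seen):
                seen.add((nx, ny))
                queue.append(path + [(nx, ny)])
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(find_path(grid, (0, 0), (0, 2)))  # the "enemy" walks around the wall
```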
When someone hears the word "AI", they probably think of a sentient computer, not a lot of math. That causes confusion about how it works and what it can do.
"This tool is exclusively built to respond to your chats how a person would. this includes claiming it knows things reguardless of it actually does. it’s knolage is limited to it’s ‘training’ process’ "
Agreed, LLMs for mass consumption should come with some disclaimer.
Lately, I just wish it didn't lie or make stuff up. And after you draw attention to false information, it often doubles down, or apologises and then just repeats the same bs.
If it doesn’t know something, it should just admit it.
LLMs don't know that they are wrong. They just mimic how we talk, but there is no conscious choice behind the words used.
They just try to predict which word to use next, trained on an ungodly amount of data.
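A minimal sketch of that loop, using made-up toy probabilities instead of a real trained network, just to show that nothing in it ever checks whether a statement is true:

```python
import random

# Toy next-word model: hand-written probabilities standing in for what a real
# LLM learns from its training data. There is no notion of true or false here,
# only "which word tends to follow these words".
toy_model = {
    ("the", "sky", "is"): {"blue": 0.7, "green": 0.2, "falling": 0.1},
    ("sky", "is", "blue"): {"today": 0.5, "because": 0.3, ".": 0.2},
}

def next_word(context):
    # Sample the next word from whatever distribution was learned for this context.
    probs = toy_model.get(tuple(context[-3:]), {".": 1.0})
    words = list(probs)
    return random.choices(words, weights=[probs[w] for w in words])[0]

text = ["the", "sky", "is"]
for _ in range(3):
    text.append(next_word(text))
print(" ".join(text))  # might print "the sky is blue today ." or "the sky is falling . ."
```

Scale that idea up by a few hundred billion parameters and you get fluent answers, but still no step where the model knows it is wrong.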
The technology side of generative AI is fine. It’s interesting and promising technology.
The business side sucks, and the AI companies are just the latest continuation of the tech grift: trying to squeeze as much money as possible from the latest hyped tech, laws or social or environmental impact be damned.
We need legislation to catch up. We also need society to be able to catch up. We can’t let the AI bros continue to foist more “helpful tools” on us, grab the money, and then just watch as it turns out to be damaging in unpredictable ways.
I agree, but I’d take it a step further and say we need legislation to far surpass the current conditions. For instance, I think it should be governments leading the charge in this field, as a matter of societal progress and national security.
For it to go away just like Web 3.0 and NFTs did. Stop cramming it up our asses in every website and application. Make it opt-in instead of, maybe if you're lucky, opt-out. And also, stop burning down the planet with data center power and water usage. That's all.
Edit: Oh yeah, and get sued into oblivion for stealing every copyrighted work known to man. That too.
Edit 2: And the tech press should be ashamed of how much they've been fawning over these slop generators. They gladly parrot press releases, claim it's the next big thing, and generally just suckle at the teat of AI companies.
I’m perfectly ok with AI, I think it should be used for the advancement of humanity. However, 90% of popular AI is unethical BS that serves the 1%. But to detect spoiled food or cancer cells? Yes please!
It needs extensive regulation, but doing so requires tech literate politicians who actually care about their constituents. I’d say that’ll happen when pigs fly, but police choppers exist so idk
Gen AI should be an optional tool to help us improve our work and life, not an unavoidable subscription service that makes it all worse and makes us dumber in the process.
I do not need AI and I do not want AI. I want to see it regulated to the point that it becomes severely unprofitable. The world is burning and we are heading face first towards a climate catastrophe (if we're not already there); we DON'T need machines to mass-produce slop.
What do I really want?
Stop fucking jamming it up the arse of everything imaginable. If I got a genie wish, I'd make it illegal for it to be anything but opt-in.
I think it's just a matter of time before it starts being removed from places where it just isn't useful. For now, companies are throwing it at everything to see what sticks. WhatsApp and JustEat added AI features, and I have no idea why or how they could be used in those services, and I can't imagine people using them.
A breakthrough in AI alignment research.
Ruin the marketing. I want them to stop using the catch-all term AI and use the appropriate terminology: narrow AI. It needs input, so let's stop making up fantasies about AI. In truth it's bullshit.
The term artificial intelligence is broader than many people realize. It doesn’t refer to a single technology or a specific capability, but rather to a category of systems designed to perform tasks that would normally require human intelligence. That includes everything from pattern recognition, language understanding, and problem-solving to more specific applications like recommendation engines or image generation.
When people say something “isn’t real AI,” they’re often working from a very narrow or futuristic definition - usually something like human-level general intelligence or conscious reasoning. But that’s not how the term has been used in computer science or industry. A chess-playing algorithm, a spam filter, and a large language model can all fall under the AI umbrella. The boundaries of AI shift over time: what once seemed like cutting-edge intelligence often becomes mundane as we get used to it.
So rather than being a misleading or purely marketing term, AI is just a broad label we’ve used for decades to describe machines that do things we associate with intelligent behavior. The key is to be specific about which kind of AI we’re talking about - like “machine learning,” “neural networks,” or “generative models” - rather than assuming there’s one single thing that AI is or isn’t.
All of this is permutation-based coding, don't bullshit me. AI is being used for an MLM scam.
If you don’t like how AI is being hyped or used in business, fair enough. But saying it’s all just “permutation-based coding” isn’t accurate - it’s not some hard-coded script shuffling words around. These systems are trained on massive amounts of data to learn patterns in language, and they generate responses based on that. You can be skeptical without throwing out the whole concept or pretending it’s just smoke and mirrors.
Destroy capitalism. That’s the issue here. All AI fears stem from that.
- Trained on stolen ideas: ✅
- Replacing humans who have little to no safety net while enriching an owner class: ✅
- Disregard for resource allocation, use, and pollution in the pursuit of profit: ✅
- Being forced into everything so as to become unavoidable and foster dependence: ✅
Hey wow look at that, capitalism is the fucking problem again!
God we are such pathetic gamblemonkeys, we cannot get it together.
Maybe, maybe not, but call it stolen creations instead; the point still stands.
Not stolen, copied creations. When something is stolen, the one who originally held it no longer has it. In other words, stealing covers physical things.
Copying is what you’re talking about and this isn’t some pointless pedantic distinction. It’s an actual, real distinction that matters from both a legal/policy standpoint and an ethical one.
Stop calling copying stealing! This battle was won by everyday people (Internet geeks) against Hollywood and the music industry in the early 2000s. Don't take it away from us. Let's not go back to the "you wouldn't download a car" world.