In early tests at a workshop attended by humanitarian organizations, refugee aid groups, and nonprofits, Albrecht and Fournier-Tombs said the reactions were strong and that many were negative. “Why would we want to present refugees as AI creations when there are millions of refugees who can tell their stories as real human beings?” one person said.
I love how the article then proceeds to not answer this question. What a dumb idea. What a waste of UN funds.
I feel like the article answers the question, or rather it gives the researchers a chance to answer the question:
When I spoke with them, both Albrecht and Fournier-Tombs were clear that the goal of the workshop was to spark conversation and deal with the technology now, as it is.
“We’re not proposing these as solutions for the UN, much less UNHCR (United Nations High Commissioner for Refugees). We’re just playing around with the concept,” Albrecht said. “You have to go on a date with someone to know you don’t like ‘em.”
Fournier-Tombs said that it’s important for the UN to get a handle on AI and start working through the ethical problems with it. “There’s a lot of pressure everywhere, not just at the UN, to adopt AI systems to become more efficient and do more with less,” she said. “The promise of AI is always that it can save money and help us accomplish the mission…there’s a lot of tricky ethical concerns with that.”
She also said that the UN can’t afford to be reactive when it comes to new technology. “Someone’s going to deploy AI agents in a humanitarian context, and it’s going to be with a company, and there won’t be any real principles or thought, consideration, of what should be done,” she said. “That’s the context we presented the conversation in.”
The goal of the experiment, Albrecht said, was always to provoke an emotional reaction and start a conversation about these ethical concerns.
“You create a kind of straw man to see how people attack it and understand its vulnerabilities.”
So if you read the headline and have the obvious visceral reaction, if you are asking yourself that question from the article, it kind of sounds like that is the point. They’re doing it now so that if people see it and say “that’s stupid”, hopefully that stops xAI or someone else from trying this to profit on the suffering of poor people. Alternatively, if people see it and say “wow this actually helped me understand”, that is also useful for the world at large. It doesn’t sound like the latter is the case, but that’s why you test a hypothesis.
Those are kind of non-answers… “Why the fuck are you doing that?” and the answers are all “Well, somebody’s probably doing it at some point, so why don’t we do it now?” or “you gotta try stuff,” as if that explains anything. Like, no, there are some things that don’t need to be tested. This is arguing on the level of “Caaaaarl, that kills people!” You don’t need to punch people in the face to know that’s a dumb thing to do. You don’t need to spill milk to know it’s a dumb thing to do. And you sure as fuck don’t need to date somebody you dislike to know that fucking them is a dumb thing to do, or create AI refugees as the UN to know it’s a dumb thing to do! Like, what argument is that? We’re not talking to three-year-olds who have never touched a candle! The UN should be able to anticipate the consequences of their actions! ESPECIALLY IF THEY HAD WORKSHOPS WHERE PEOPLE TOLD THEM IT’S A FUCKING DUMB THING TO DO!! So, no, those aren’t answers.
I guess my point is that I understand why the researchers are doing it - the UN gave them money to research ways the UN could use AI, so that is what they did. It’s not like the research is unethical in the sense that it directly harms participants. Maybe it’s a dumb waste of money, but at that point, the question is more for the UN leaders who said “we should give someone money to research AI”. And I don’t know that 404 Media has the pull to interview those people.
the UN gave them money to research ways the UN could use AI, so that is what they did
No, no, no, that’s not an excuse.
If they were, in good faith, researching ways the UN could use AI, this fucking horrible idea would have been thrown out in the first round of brainstorming.
This is a horrible idea. This is a stupid idea. We live in a world where most of the privileged wealthy West is desperate to pretend that refugees aren’t real, or don’t matter, or deserve to live in poverty. And creating fake AI refugees just gives the privileged wealthy West another way to excuse themselves, by dismissing what the AI says is fake, by telling themselves there aren’t any real people in situations that bad.
If you’re getting to the point where you’re implementing an obviously horrible idea and asking for public feedback on it, you don’t get to blame the people who told you to come up with ideas. You should have thrown that bad idea out. You should not have implemented it. That’s on you.
the UN gave them money to research ways the UN could use AI, so that is what they did.
That’s kind of my point… They didn’t. To research ways the UN could use AI, you could have workshops and interviews with various groups, experts and non-experts alike. You don’t just pick one utterly insane use case (one that was called out beforehand as such) and implement that. You do research on the options and pick either the best ones or, if there’s no good one, none!
To come up with a research project, it has to go through various pitches, drafts, and proposals. I can’t imagine every single layer of oversight failing so utterly that a project with this high-school level of argument (“well, we could do this, so why wouldn’t we?”) passes each of them. There has to be a better reason why they did this. And if there really isn’t, a lot of people should ask themselves what the fuck they’re getting paid for if they let this happen - and some other people should ask whether they’re the ones who ought to fire the former.