I see the shit that people send out, obvious LLM crap, and wonder how poorly they must write to consider the LLM output worth something. And then I wonder if the people consuming this LLM crap are OK with baseline mediocrity at best. And that’s not even getting into the ethical issues of using it.
Even that I would consider unjust. User data would HAVE to be opt-IN.
Every poll is instantly skewed by the user base. I think AI is amazing, but it’s not worth the hype. I’m cautious about its actual uses and its spectacular failures. I’m not a “fuck AI” person, but I’m also not an “AI is going to be our god in 2 years” person. And I feel like I’m closer to the average.
As a DuckDuckGo user who uses Claude and ChatGPT every day, I don’t want AI features in DuckDuckGo because I would probably never use them. So many companies are adding chatbot features, and most of them can’t compete with the big names. Why would I use a bunch of worse LLMs and learn a bunch of new interfaces when I can just use the ones I’m already comfortable with?
Purveyor of fine slop, are ya?
As much as I agree with this poll, DuckDuckGo has a very self-selecting audience. The number doesn’t actually mean much statistically.
If the general public knew that “AI” is much closer to predictive text than intelligence, they might be more wary of it.
There was no implication that this was a general poll designed to demonstrate the general public’s attitudes. I’m not sure why you mentioned this.
Because that’s how most people implicitly frame headlines like this one: as a generalization about the public.
The poll didn’t even ask a real question. “Yes AI or no AI?” No context.
I mean, you gotta hand it to “AI”: it is very sophisticated, and very resource-intensive, predictive text.
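To make the “predictive text” point concrete, here’s a toy sketch of greedy next-token prediction. The bigram table and all its probabilities are invented for illustration; a real LLM learns its probabilities with a neural network over a huge vocabulary, but the generation loop is the same idea:

```python
# Toy next-token prediction: at each step, pick the most likely continuation.
# The hand-written bigram table below stands in for an LLM's learned model;
# every probability here is made up for the example.
BIGRAMS = {
    "the": {"cat": 0.4, "dog": 0.35, "end": 0.25},
    "cat": {"sat": 0.6, "ran": 0.3, "the": 0.1},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sat": {"down": 0.8, "the": 0.2},
    "ran": {"away": 0.9, "the": 0.1},
}

def generate(start: str, max_tokens: int = 5) -> list[str]:
    tokens = [start]
    for _ in range(max_tokens):
        choices = BIGRAMS.get(tokens[-1])
        if not choices:
            break  # no known continuation: stop generating
        # Greedy decoding: always take the highest-probability next token.
        tokens.append(max(choices, key=choices.get))
    return tokens

print(" ".join(generate("the")))  # -> "the cat sat down"
```

Swap the table for trillions of learned weights and you get something like ChatGPT; the mechanism is still “predict the next token.”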
I think most people find something like ChatGPT or Copilot useful in their day-to-day lives. LLMs are a very helpful and powerful technology. However, most people are against these models collecting every piece of data imaginable from you. People aren’t against the tech; they’re against the people running the tech.
I don’t think most people would mind if a FOSS LLM, designed with privacy and complete user control over their data in mind, were integrated with an option to completely opt out. I think that’s the only way to get people to trust this tech again and get on board.
In the non-tech crowds I have talked to about these tools, people have mostly been concerned with them just being wrong, and, when they’re integrated with other software, being annoyingly wrong.
Idk, most people I know don’t see it as a magic crystal ball that’s expected to answer all questions perfectly. I’m sure people like that exist, but for the most part I think people understand that these LLMs are flawed. However, I know a lot of people who use them for everyday tasks like grammar checks, drafting emails/documents, brainstorming, basic analysis, and so on. They’re pretty good at this sort of thing because that’s what they’re built for. The issues of privacy and greed remain, though I think some of them would at least be partially solved if these tools were designed with privacy in mind.
If I understand right, the usefulness of basic questions like “Hey ChatGPT, how long do I boil pasta?” is offset by the vast resources needed to answer them. We just see it as simple and convenient because the companies run at a loss during their “build up interest” phase. If the effort to sell the product that way fails, it’s going to fund itself by harvesting data.
I don’t disagree per se, but I think there’s a pretty big difference between people using ChatGPT to correct grammar or draft an email and people using it to generate a bunch of slop images/videos. The former is a more streamlined way to use the internet, which has value, while the latter is just there for the sake of it. I think it’s feasible for newer LLM designs to focus on what’s actually popular and useful, and cut out the fat that’s draining large amounts of resources for no good reason.
I’m enjoying how ludicrous the idea of a “privacy-friendly AI” is: trained on data stolen by inhaling everyone else’s corner of the internet, but suddenly it cares about “your” data.
It’s not impossible. You could build a model on consent, where the training data is obtained ethically, data collected from users is anonymized, and users can opt out if they want to. The current model of shameless theft isn’t the only path there is.
I think the idea is anonymous querying.
Maybe a personal LLM trained on data you actually already own, with self-sufficient infrastructure, sure. But visual-generation LLMs and data theft aren’t cool.
We can agree to that
Amen brother (or sister)
I think you’re wildly overestimating how much people care about their personal data.
I think you may find yourself in the minority.
The other 10% are bots
Most objective article (sarcasm)
In fact, it has a whole-ass “AI” chatbot product, Duck.ai, which is bundled in with DuckDuckGo’s privacy VPN for $10 a month.
What’s wrong with this?
Oh, we meet again. I haven’t logged in for a long time 😅
I would like to petition to rename AI to
Simulated
Human
Intelligence
Technology

or: Computer Rendered Anonymized Plagiarism
It’s funny, but it still lies about the “intelligence” part.
‘Intelligence’ is a measure, not an absolute. Human intelligence can range anywhere from genius to troglodyte. But you’re right, still not human, still at very best simulated, and isn’t capable of reason, just the illusion of reason.
On public transit, it bottoms out.
I would like to petition to rename AI to
Fucking stupid and useless
FSAU? That’s not a word.
Tell me you don’t know shit about LLMs without telling me.
I guess they haven’t asked me, or it’d be 91%.
You’ve been learning statistics from an LLM, haven’t you?
I’m a proud graduate of the Terrence Howard school of math.
1 * 1 = 2, because reasons
This guy knows the SHIT out of statistics!
Okay, so that’s not what the article says. It says that 90% of respondents don’t want AI search.
Moreover, the article goes into detail about how DuckDuckGo is still going to implement AI anyway.
Seriously, titles in subs like this need better moderation.
The title was clearly engineered to generate clicks and drive engagement. That is not how journalism should function.
Well, that’s how journalism has always functioned. People call it “clickbait” as if it’s something new, but headlines have always been designed to grab your attention and get you to read.
That is the title from the news article. It might not be good journalism, but copying the title of the source is pretty standard in most news aggregator communities.
Unless I’m mistaken, this title is generated to match the title at the link. Are you saying the mods should update titles to accurately reflect the content of the articles posted?
Also, DuckDuckGo is a search engine. What other AI would it even offer?
It has a separate LLM chat interface, and you can disable the AI summary that comes up on web search results.
Most people against AI probably have their noses in SearXNG and 4get.
I find Google Gemini to be useless and annoying. I do enjoy Grok, but I don’t think life would end if we walked away from AI tomorrow.
Anyone under 74 use DuckDuckGo?
Yes
ಠ_ಠ
Get off my lawn… search engine!
I think LLMs are fine for specific uses: a useful technology for brainstorming, debugging code, generic code examples, etc. People are just wary of oligarchs mandating how we use technology. We want to be customers, but they want to shape how we work instead, as if we were livestock.
I am explicitly against the use case many of the respondents were probably thinking of: the “AI summary” that pops up above the links in a search result. It’s a waste if I didn’t ask for it, it steals information from those pages, damaging the whole WWW, and ultimately, it gets the answer horribly wrong often enough to be dangerous.
Right? Like, let me choose if and when I want to use it. Don’t shove it down our throats and then complain when we get upset or don’t use it how you want us to. We’ll use it however we want, not how you want.
I should further add: don’t fucking use it in places where it’s not capable of functioning properly and then try to deflect the blame from yourself onto the AI, like Air Canada did.
When Air Canada’s chatbot gave incorrect information to a traveller, the airline argued its chatbot is “responsible for its own actions”.
Artificial intelligence is having a growing impact on the way we travel, and a remarkable new case shows what AI-powered chatbots can get wrong – and who should pay. In 2022, Air Canada’s chatbot promised a discount that wasn’t available to passenger Jake Moffatt, who was assured that he could book a full-fare flight for his grandmother’s funeral and then apply for a bereavement fare after the fact.
According to a civil-resolutions tribunal decision last Wednesday, when Moffatt applied for the discount, the airline said the chatbot had been wrong – the request needed to be submitted before the flight – and it wouldn’t offer the discount. Instead, the airline said the chatbot was a “separate legal entity that is responsible for its own actions”. Air Canada argued that Moffatt should have gone to the link provided by the chatbot, where he would have seen the correct policy.
The British Columbia Civil Resolution Tribunal rejected that argument, ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees.
They were trying to argue that it was legally responsible for its own actions? Like, that it’s a person? And not even an employee at that? FFS
You just know they’re going to make a separate corporation, put the AI in it, and then contract it to themselves and try again.
ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees
That is a tiny fraction of a rounding error for a company that size. And it doesn’t come anywhere near being just compensation for the stress and loss of time it likely caused.
There should be some kind of general punitive “you tried to screw over a customer or the general public” fee, defined as a fraction of the company’s revenue. It could be waived for small companies if the resulting sum is too small to be worth the administrative overhead.
It’s a tiny amount, but it sets an important precedent. Not only Air Canada, but every company in Canada is now going to have to follow that precedent. It means that if a chatbot in Canada says something, the presumption is that the chatbot is speaking for the company.
It would have been a disaster to have any other ruling. It would have meant that the chatbot was now an accountability sink. No matter what the chatbot said, it would have been the chatbot’s fault. With this ruling, it’s the other way around. People can assume that the chatbot speaks for the company (the same way they would with a human rep) and sue the company for damages if they’re misled by the chatbot. That’s excellent for users, and also excellent to slow down chatbot adoption, because the company is now on the hook for its hallucinations, not the end-user.
Definitely agree, there should have been some punitive damages for making them go through that while they were mourning.
…what kind of brain damage did the rep have to think that was a viable defense? Surely their human customer service personnel are also responsible for their own actions?
It makes sense for them to do it; it’s just evil-company logic.
If they lose, it’s some bad press and people will forget.
If they win, they’ve begun setting precedent to fuck over their customers and earn more money. Even if it only had a 5% chance of success, it was probably worth it.
Sure, but there was no way that wouldn’t have been thrown out.
But the shareholders… /s