I think AI has mostly been about luring investors into pumping up share prices rather than offering something of genuine value to consumers.
Some people are gonna lose a lot of other people’s money over it.
Yeah, it can make some products better, but most of the products these days that use AI don’t actually need it. It’s annoying to use products that actively shovel in AI when it isn’t even needed.
Ya know what product MIGHT be better with AI?
Toasters. They have ONE JOB, and everybody agrees their toaster is crap. But you’re not going to buy another toaster, because that too will be crap.
How about a toaster that accurately and evenly toasts your bread, and then DOESN’T give you a heart attack at 5am when you’re still half asleep???
IS THAT TOO MUCH TO ASK???
Sweet, I’m the one who gets to link the obligatory Technology Connections toaster video!
Aw man, now I want this toaster.
I said the exact same thing months ago when I saw that video. I don’t even use a toaster.
This is the visionary we need. Take my venture capital millions on a magic carpet ride, time traveler!
Nah. We already have AI toasters, and they’re ambitious, but rubbish.
Adding AI is just serious overkill for a toaster, especially when it wouldn’t add anything meaningful, not compared to just designing the toaster better.
It only needs one rule it can understand: don’t catch on fire. Turn yourself off IF smoke.
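The whole firmware joked about above fits in a few lines. A toy sketch (all names here are made up, and a real appliance would of course use a hardware cutoff, not just software):

```python
# Toy sketch of the "one rule" toaster firmware: turn yourself off IF smoke.
# toaster_tick and its arguments are hypothetical names for illustration.

def toaster_tick(smoke_detected: bool, heater_on: bool) -> bool:
    """Return the new heater state: off if smoke, otherwise unchanged."""
    if smoke_detected:
        return False  # don't catch on fire
    return heater_on

# usage
assert toaster_tick(smoke_detected=True, heater_on=True) is False
assert toaster_tick(smoke_detected=False, heater_on=True) is True
```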
AI toasters are a Bad Idea
Yes, I’m getting some serious dot-com bubble vibes from the whole AI thing. But the dot-com boom produced Amazon, and every company is basically going all-in in the hope that they’re the next Amazon. Most will end up like pets.com, but it’s a risk they’re willing to take.
OpenAI will fail. StabilityAI will fail. CivitAI will prevail, mark my words.
“You might lose all your money, but that is a risk I’m willing to take”
- visionary AI techbro talking to investors
Investors pump money into a bunch of companies so that the chance of at least one of them making it big and paying them back for all the failed investments is almost guaranteed. That’s what taking risks is all about.
Sure, but it SEEMS that some investors are relying on buzzwords and hype, without research, ignoring the fundamentals of investing: besides the ever-evolving claims of the CEO, is the company well managed? What is their cash flow, and where will it be a year from now? Do the upper-level managers have coke habits?
You’re right, but these fundamentals don’t really matter anymore; investors are buying hype and hoping to sell a bigger hype for more money later.
Seeing the whole thing as Knowingly Trading in Hype is actually a really good insight.
Certainly it neatly explains a lot.
Also called a Ponzi scheme, where every participant knows it’s a scam but hopes to find a few more fools before it crashes and to leave with a positive balance.
If the whole sector turns out to be garbage it won’t matter which particular set of companies within it you invest in; you will get burned if you cash out after everyone else.
My doorbell camera manufacturer now advertises their products as using “Local AI”, meaning they’re not relying on a cloud service to look at your video to detect humans/faces/etc. Honestly, it seems like a good (marketing) move.
I tried to find the advert but couldn’t. I see it on YouTube a lot: an Adobe AI ad which depicts, without shame, AI writing a newsletter/promo for a business owner’s new product (cookies or ice cream or something). The owner puts no effort into their own product, and a customer happily consumes because they were attracted by the thoughtless promo.
How are producers/consumers okay with everything being so mediocre??
“You’re always trying to make everything just a little bit worse so that you can feel good about having a lot more of it. I love it. It’s so human!” - The Good Place
I’m not. My particular beef is with plastics and toxic materials and chemicals being ubiquitous in everything I buy. It’s a systemic problem I can do almost nothing about, apart from making things myself out of raw materials.
A lot of it is follow-the-leader type bullshit. Companies in areas where AI is actually beneficial have already been implementing it for years, quietly, because it isn’t something new or exceptional. It’s just the tool you use for solving certain problems.
Investors going to bubble though.
Definitely. Many companies have implemented AI without thinking with 3 brain cells.
Great and useful implementations of AI exist, but right now it’s maybe 1 product in 100.
My old company before they laid me off laid off our entire HR and Comms teams in exchange for ChatGPT Enterprise.
“We can just have an AI chatbot for HR and pay inquiries and ask Dall-e to create icons and other content”.
A friend who still works there told me they’re hiring a bunch of “prompt engineers” to improve the quality of the AI outputs haha
I’m sorry. Hope you find a better job on the inevitable downswing of the hype, when someone realizes that a prompt can’t replace a person in customer service. Customers will invest more time, i.e., even wait in a purposely engineered hold-music hell, to have a real person listen to them.
That’s an even worse ‘use case’ than I could imagine.
HR should be one of the most protected fields against AI, because you actually need a human resource.
And “prompt engineer” is so stupid. The “job” only exists because the AI doesn’t understand what you want well enough. The only productive person you could hire would be a programmer or someone who could actually tinker with the AI itself.
God that sounds like hell.
If my employer is anything to go by, much of it is just unimaginative businesspeople who are afraid of missing out on what everyone else is selling.
At work we were instructed to shove ChatGPT into our systems about a month after it became a thing. It makes no sense in our system, and many of us advised management it was irresponsible, since it gives people advice on very sensitive matters without any guarantee that the advice is any good. But no matter, we had to shove it in there, with small print to cover our asses. I bet no one even uses it, but sales can tell customers the product is “AI-driven”.
As I mentioned in another post, about the same topic:
Slapping the words “artificial intelligence” onto your product makes you look like a shady used-car salesman: in the best case it’s misleading, in the worst it’s actually true but poorly done.
Market shows that investors are actively turned on by products that use AI
Market shows that the market buys into hype, not value.
Market shows that hype is a cycle and the AI hype is nearing its end.
How can you tell when the cycle is ending?
When one of two things happens:
- A new hype starts to replace it (can happen fast though!)
- The hype starts to specialize into subcategories of the hype (e.g. AI images, AI videos, AI text generation)
When “AI” hype dies down we are likely to see “AI” removed from various topics because enough people know and understand the hyped parent topic. It’ll just be “image generation”, “video generation”, “generated text”, etc.
There are different types of people in the market. The informed ones hate AI, and the uninformed love it. The informed ones tend to be the cornerstones of businesses, and the uninformed ones tend to be in charge.
So we have… All this. All this nonsense. All because of stupid managers.
It’s the new block chain or NFT hype, they think it’s magic.
But what if it actually is magic this time? Just this once!? And we miss the hype train?! (This is a sarcastic impression of real conversations I have had.)
Customers worry about what they can do with it, while investors and spectators and vendors worry about buzzwords. Customers determine demand.
Sadly, what some of those customers want is to somehow improve their own business without thinking, and then they too care about buzzwords; that’s where the hype comes from.
Prominent market investor arrested and charged for sexually assaulting AI robot
LLMs: using statistics to generate reasonable-sounding wrong answers from bad data.
Often the answers are pretty good. But you never know if you got a good answer or a bad answer.
And the system doesn’t know either.
For me this is the major issue. A human is capable of saying “I don’t know”. LLMs don’t seem able to.
Accurate.
No matter what question you ask them, they have an answer. Even when you point out their answer was wrong, they just have a different answer. There’s no concept of not knowing the answer, because they don’t know anything in the first place.
The worst for me was a fairly simple programming question. The class it used didn’t exist.
“You are correct, that class was removed in OLD version. Try this updated code instead.”
Gave another made up class name.
Repeated with a newer version number.
It knows what answers smell like, and the same with excuses. Unfortunately there’s no way of knowing whether it’s actually bullshit until you take a whiff of it yourself.
So instead of Prompt Engineer, the more accurate term should be AI Taste Tester?
From what I’ve seen you’ll need an iron stomach.
They really aren’t. Go ask about something in your area of expertise. At first glance, everything will look correct and in order, but the more you read the more it turns out to be complete bullshit. It’s good at getting broad strokes but the details are very often wrong.
Now imagine someone that doesn’t have your expertise reading that answer. They won’t recognize those details are wrong until it’s too late.
That’s about the experience I’ve had. I asked it for factual information in the field I work in. It didn’t give correct answers. Or it gave protocols that were strange and wouldn’t have been successful.
With a proper framework, decent assertions are possible:
- It must cite the source and provide the quote, not just a summary.
- An adversarial review must be conducted.
If that is done, the workload on the human is very low.
That said, it’s STILL imperfect, but it’s leagues better than one-shot question and answer.
Except LLMs don’t store sources.
They don’t even store sentences.
It’s all a stack of massive N-dimensional probability spaces roughly encoding the probabilities of certain tokens (which are mostly but not always words) appearing after groups of tokens in a certain order.
And all of that to just figure out “what’s the most likely next token”, an output which is then added to the input and fed into it again to get the next word and so on, producing sentences one word at a time.
Now, if you feed it as input a long, very precise sentence taken from a unique piece, maybe you’re lucky and it will output the correct next word, but if you already have all that, you don’t really need an LLM to give you the rest.
Maybe the “framework” you seek, which is quite akin to an indexer with a natural-language interface, can be made with AI, but it’s not something you can do with LLMs alone because their structure is entirely unsuited for it.
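The decoding loop described above can be sketched in a few lines. This is a toy with a hard-coded bigram table and completely made-up probabilities; a real LLM replaces the table with huge learned tensors, but the loop is the same: pick the most likely next token, append it to the input, repeat.

```python
# Toy sketch of LLM decoding: "what's the most likely next token",
# appended to the input and fed back in, one word at a time.
# BIGRAMS and its probabilities are invented purely for illustration.

BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
}

def generate(prompt, max_new=3):
    tokens = list(prompt)
    for _ in range(max_new):
        dist = BIGRAMS.get(tokens[-1])
        if dist is None:  # no known continuation: stop
            break
        # greedy decoding: take the single most probable next token
        tokens.append(max(dist, key=dist.get))
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat', 'down']
```

Note there is no store of sentences or sources anywhere in this loop; only transition probabilities, which is the commenter’s point.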
The proper framework does, with a data store, indexing, and access functions.
The cutting-edge work is absolutely using LLMs in post-RAG pipelines.
Consumer-grade chat interfaces definitely do not do this.
Edit: if you’re worried about topics like context windows, sentence splitting, or source extraction, you aren’t using a best-in-class framework anymore.
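The retrieval half of such a pipeline can be sketched very roughly: look up matching passages in a store and hand back the exact quote plus its source, so the model can cite instead of invent. The scoring here (word overlap) is deliberately naive, and `DOCS`/`retrieve` are made-up names; real pipelines use vector indexes and embedding similarity.

```python
# Naive sketch of RAG retrieval: return (source, quote) so the answer
# can cite the source and provide the quote, not just a summary.
# DOCS and retrieve are hypothetical; scoring is simple word overlap.

DOCS = {
    "manual.txt": "Unplug the toaster before cleaning the crumb tray",
    "faq.txt": "The warranty covers heating element failures for two years",
}

def retrieve(query):
    """Return (source, quote) for the best-matching document, or None."""
    q = set(query.lower().split())
    best, best_score = None, 0
    for source, text in DOCS.items():
        score = len(q & set(text.lower().split()))
        if score > best_score:
            best, best_score = (source, text), score
    return best

src, quote = retrieve("how long does the warranty last")
print(src)  # faq.txt
```

The point of the structure is that the quote comes verbatim from the store, not from the LLM, which sidesteps the “LLMs don’t store sources” objection above.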
Sounds familiar. Citation please
This is because the AI of today is a shit sandwich that we’re being told is peanut butter and jelly.
For those who like to party: All the current “AI” technologies use statistics to approximate semantics. They can’t just be semantic, because we don’t know how meaning works or what gives rise to it. So the public is put off because they have an intuitive sense of the ruse.
As long as the mechanics of meaning remain a mystery, “AI” will be parlor tricks.
And I don’t mean to denigrate data science. It is important and powerful. And real machine intelligence may one day emerge from it (or data science may one day point the way). But data science just isn’t AI.
For me, if a company fails to make a clear-cut case for why a product of theirs needs AI, I’m gonna assume they just want to misuse AI to cheaply deliver a mediocre product instead of putting in the necessary cost in man-hours.
Cue Nicholas Cage face
YA DON’T SAY!!!
We’re seeing a bunch of promises made when LLMs were the novel hot shit. Now that we’ve plateaued on how useful they are to the average consumer, every AI product is just a beta test that will drop support as soon as something newer and shinier comes along.
They just don’t get it. Once everyone is using an AI toilet and an AI toothbrush, they’ll sing a different tune.
I love skibidAI toilet
I definitely need a toilet that remembers and analyzes my shit. Yes.
Not sure what happened to it, but this was a thing already in 2005.
They will try to sell it to you as a way to detect possible health issues early. But it will just be used to analyze your food patterns and shove McDonald’s ads at you.
too bad I already eat McDonald’s every day
For some reason I imagine a toilet that automates a stool test and blood test and gives you a health report every month.
A stool test sure, but I’m not going to trust a toilet to use a sterile needle to draw blood.
If the toilet is receiving a blood sample I have bad news for your monthly health report.
I’ve been applying similar thinking to my job search. When I see AI listed in a job description, I immediately put the company into one of 3 categories:
- It is an AI company that may go out of business suddenly within the next few years leaving me unemployed and possibly without any severance.
- Management has drunk the Kool-Aid and is hoping AI will drive their profit growth, which makes me question management’s competence. This also has a high likelihood of future job loss, but at least they might pay severance.
- The buzzword was tossed in to make the company look good to investors, but it is not highly relevant to their business. These companies get a partial pass for me.
A company in the first two categories would need to pay a lot to entice me and I would not value their equity offering. The third category is understandable, especially if the success of AI would threaten their business.
It’s because consumers aren’t the dumbasses these companies think they are and we all know that the AI being shoved into everything fucking sucks worse than the systems we had before “AI.”
In Defence of AI web search from my experiences:
When I have no idea what I’m talking about, or have no or incorrect terminology, I have found Copilot and GPT-4 (separate, not the all-in-one) to be game-changing compared to plain Google.
I’m not using the data straight off the query result, but the links to the data that were provided in the result.
And embarrassingly, when I’m drunk and babbling into a microphone, Copilot finds the links to what I am looking for.
Now if you are just straight using the results and not researching the answers your mileage will vary.
Is that enough to mitigate how much worse bare Google is than it was ten years ago, back when they were winning against SEO bots? In my experience, it hasn’t been, but I’ve not done enough AI-aided web searches to have a good sample size.
No shit, because we all see that AI is just technospeak for “harvest all your info”.
Not to mention it’s usually dog shit output
To be fair, I love my dog but he has the same output 🤷
But no one is investing billions into your dog’s shit, are they?
He’s open to investment 🤷
I refuse to use Facebook anymore, but my wife and others do. Apparently the search box is now a Meta AI box, and it pisses them off every time. They want the original search back.
That’s another thing companies don’t seem to understand. A lot of them aren’t creating new products and services that use AI; they’re removing existing ones that people use daily and enjoy, and forcing some AI alternative on them. Of course people are going to be pissed off!
We aren’t allowed new things. That might change their perfectly balanced money making machine.
And making search worse so it can pretend to be an ex is not what I or anyone is looking for in the search box.
Doubt the general consumer thinks that; I’m sure most of them are turned away by the unreliability and how ham-fisted most implementations are
+ a monthly service fee
for the price of a cup of coffee
Yes the cost is sending all of your data to the harvest, but what price can you put on having a virtual dumbass that is frequently wrong?
I wonder if we’ll collectively start seeing these tech investor pump-n’-dump patterns faster, given how many have happened in such a short amount of time already.
Crypto, Internet of Things, Self Driving Cars, NFTs, now AI.
It feels like the futurism sheen has started to waver. When everything’s a major revolution inserted into every product, then isn’t, it gets exhausting.
I think the dot-com bubble is the closest comparison, honestly. There can be some useful products (mostly around how we interact with a system, not actually trying to use AI to magically solve a problem; it’s shit at that), but the hype is way too large
don’t forget Big Data
Internet of Things
This is very much not a hype and is very widely used. It’s not just smart bulbs and toasters. It’s burglar/fire alarms, HVAC monitoring, commercial building automation, access control, traffic infrastructure (cameras, signal lights), ATMs, emergency alerting (like how a 911 center dispatches a fire station, there are systems that can be connected to a jurisdiction’s network as a secondary path to traditional radio tones) and anything else not a computer or cell phone connected to the Internet. Now even some cars are part of the IoT realm. You are completely surrounded by IoT without even realizing it.
Huh, didn’t know that! I mainly mentioned it for the fact that it was crammed into products that didn’t need it, like fridges and toasters where it’s usually seen as superfluous, much like AI.
I would beg to differ. I thoroughly enjoy downloading various toasting regimens. Everyone knows that a piece of white bread toasts differently than a slice of whole wheat. Now add a homemade sourdough slice into the mix. It can get overwhelming quite quickly.
Don’t even get me started on English muffins.
With the toaster app I can keep all of my toasting regimens in one place, without having to wonder whether it’s going to toast my Pop-Tart as though it were a Hot Pocket.
Bagels are a whole different set of data than bread. New bread toasts much more slowly than old bread.
I mean, give the thing a USB interface so I can use an app to set timing presets instead of whatever UX nightmare it’d otherwise be, and I’m in. Nowadays it’s probably cheaper to throw in a MOSFET and a tiny chip than to use a bimetallic strip: fewer and less fickle parts, and when you already have the capability to be programmable, why not use it? Connecting it to an actual network? Get out of here.
Yeah, I’m being a little facetious, I hope that’s coming through lol
TimeSquirrel made a good point about Internet of Things, but Crypto and Self Driving Cars are still booming too.
IMHO it’s a marketing problem. They’re major evolutions taking root over decades. I think AI will gradually become as useful as lasers.
It’s more of a macroeconomic issue. There’s too much investor money chasing too few good investments. Until our laws stop favoring the investor class, we’re going to keep getting more and more of these bubbles, regardless of what they are.
Yeah it’s just investment profit chasing from larger and larger bank accounts.
I’m waiting for one of these bubble pops to do lasting damage, but with the amount of protections specifically for them, and money that can’t be afforded to be “lost,” it’s everyone else who has to eat dirt.
I keep thinking about how Google has implemented it; it sums up my broader feelings pretty well. They jammed this half-baked “AI” product into the very fucking top of their search results. I can’t not see it there: it’s huge and takes up most of my phone’s screen after a search, but I always have to scroll down past it because it’s wrong pretty often, or misses important details. Even if it sounds right, because I’ve had it be wrong before, I have to check the other links anyway. All it has succeeded at doing in practice is making me scroll down further before I get to my results (not unlike their ads, I might add). Like, if that’s “AI,” it’s no fucking wonder people avoid it.