A fine Wednesday morning to you all. Welcome back to The Autopian’s morning news roundup, now featuring our upgraded commenting system. It’s the slickest and most high-tech way yet for you to tell me (incorrectly) that I’m wrong. Today, you can use it to weigh in on Elon Musk’s surprising take on artificial intelligence; Mazda’s more concrete plans about moving upmarket; the additional layoffs at Lucid; and how American cities are so mad at Hyundai they’re filing lawsuits. Let’s make it happen.
Musk, Wozniak, AI Researchers Say Slow Down
This year will likely be remembered as the one when the world started taking artificial intelligence more seriously. But the fact that we’ve gone from “What the hell is ChatGPT?” to “How can it replace all the writers at CNET?” in like, five minutes, is deeply concerning. After all, Big Tech has an extraordinarily poor track record when it comes to developing things that have real, tangible impacts on people’s health and safety. Just look at Theranos, or the apparent link between Facebook posts and genocide, or the many false promises and dangerous missteps around self-driving cars. (Why, just this past week, a Cruise test car plowed into a San Francisco bus. When do we see any benefits from this again?)
So the fact that even Elon Musk—whose Tesla Autopilot system has been roundly criticized for pushing the boundaries of safety—is among those signing a letter urging caution around AI development is really something. That letter came out today and it includes Musk, Apple co-founder and technologist Steve Wozniak, and numerous researchers. This is from The Verge:
The letter, published by the nonprofit Future of Life Institute, notes that AI labs are currently locked in an “out-of-control race” to develop and deploy machine learning systems “that no one — not even their creators — can understand, predict, or reliably control.”
“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” says the letter. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
Signatories include author Yuval Noah Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang, and a number of well-known AI researchers and CEOs, including Stuart Russell, Yoshua Bengio, Gary Marcus, and Emad Mostaque. The full list of signatories can be seen here, though new names should be treated with caution as there are reports of names being added to the list as a joke (e.g. OpenAI CEO Sam Altman, an individual who is partly responsible for the current race dynamic in AI).
You can read the letter in full here. As that story notes, it’s unlikely to have a measurable impact on AI deployment because lots of companies are rushing this stuff to market either for their product viability or just to grab investor cash.
Now, what does this have to do with cars, you ask? The answer is a lot, down the line.
AI is considered by many in the auto industry to be crucial to the software-focused features and developments planned for future cars, including virtual personal assistants, customizing the driving experience, manufacturing, connected cars and, most notably, autonomous cars. The lack of AI sophistication is the reason driverless cars are hitting walls lately (sometimes literally, like when a Tesla self-parks inside an Arby’s yet again.)
Automakers across the board are making huge pushes into software, including developing whole divisions like Volkswagen’s CARIAD and General Motors’ Ultifi, and you had better believe they’re all making big plans for AI in cars. Hearing caution from Musk, of all people, says a lot here.
I hope companies take heed, but I’m not especially optimistic. Capitalism gonna capitalism.
Mazda’s Premium American Push Comes Into Focus
I’m a Mazda fan. I’ll admit it. And not just for rotary engines or the 787B, but for the current stuff. These days your humble Editor-at-Large mostly drives a 2018 Mazda 3 hatchback. It’s not the fastest car I’ve ever owned, but it is the nicest: dead reliable, great on gas, plush inside (I have a Grand Touring, so it’s got some nice stuff) and a shockingly good handler for its class. Cars like that are why I want to see the brand be successful.
But as we all know, it’s tough out there for small players like Mazda, especially with the demands of electrification—not to mention all the tech stuff I just mentioned above. Still, Mazda has new leadership as of this year and it has big plans for its most important market, America. The plan is crossovers, luxury and electrification. Here’s Tom Donnelly, the new CEO of Mazda North American Operations, talking to Automotive News:
First on the to-do list: launching two new crossovers that are vital to the brand’s move upmarket — the CX-90 and CX-70 — designed specifically for U.S. consumers.
Donnelly expects that the CX-90 will log sales in the 90,000 range, eventually equal to “three to potentially four times the volume increase from the outgoing model.” In 2022, Mazda sales of the CX-9 topped 34,500, according to the Automotive News Research & Data Center.
[…] Donnelly said the CX-90 and CX-70 represent the first part of Mazda’s “multiphase plan,” which is the introduction of its plug-in hybrid technology.
“Phase two will be an electrified model in the next year or two, and phase three will be a full lineup of vehicles that have electrification available by around 2030,” Donnelly said, clarifying that Mazda will not be all-electric by that target but will offer all-electric powertrains across every vehicle in its lineup.
More Lucid Layoffs
It’s not just hard for the smaller legacy automakers. It’s hard for everyone. But I think the startups are having an especially rough time right now. Rivian is facing significant headwinds and some of the really upstart players like Canoo may not make it through the year. Now Lucid, which has been struggling to keep demand for its cars up, is cutting nearly 20% of its workforce, Reuters reports:
The maker of the Air luxury sedan last month forecast 2023 production that fell well short of analysts’ expectations and reported a major drop in orders during the fourth quarter.
The company plans to communicate with all its employees over the next three days about the plan, CEO Peter Rawlinson said in a letter, adding its U.S. workforce will see reductions in nearly every organization and level, including executives.
Lucid, which had about 7,200 employees at the end of last year, will incur between $24 million and $30 million in related charges. The company expects to substantially complete the restructuring plan by the end of the second quarter.
“We are also taking continued steps to manage our costs by reviewing all non-critical spending at this time,” Rawlinson said.
Everyone Is Mad At Hyundai And Kia
You remember those rampant Hyundai and Kia thefts recently where countless cars got Gone in 60 Seconds-‘d with humble USB cables? Hell, you may have even lost your own car that way. You certainly wouldn’t be alone there. While the Hyundai Motor Group has issued a fix, it’s about to face significant legal headwinds over the problem.
Reuters reports the city of St. Louis is suing the automaker, and in doing so it joins a bunch of other cities doing the same:
The lawsuit filed in U.S. District Court in Missouri follows similar actions taken by several U.S. cities to address increasing Hyundai and Kia thefts that use a method popularized on TikTok and other social media channels. Other cities suing Kia and Hyundai include Cleveland, Ohio; San Diego, California; Milwaukee, Wisconsin; Columbus, Ohio; and Seattle.
“Big corporations like Kia and Hyundai must be held accountable for endangering our residents and putting profit over people,” said St. Louis Mayor Tishaura Jones.
Kia and Hyundai vehicles represent a large share of stolen cars in multiple U.S. cities, according to data from police and state officials. Many Hyundai and Kia vehicles have no electronic immobilizers, which prevent break-ins and bypassing the ignition.
The automaker responded by saying the lawsuits are “without merit.” That probably came from a spokesperson whose Sonata was stolen mere moments later by a group of teens who made a TikTok dance video while they did it.
Anyway, if you own one of these cars, make sure you’re getting it fixed ASAP.
Your Turn
What’s your take on all this AI stuff, especially when it comes to cars? The people I talk to regularly in the industry do seem to think it’s crucial to delivering the tech-focused future they want. Now, I’m not convinced all of that stuff (especially the subscription thing) is what car owners want, but I think we’re deluding ourselves if we believe this is all going away anytime soon.
- The Red Bull F1 Team, Rivian, Me: Who Made The Biggest Boneheaded Car-Mistake?
- General Motors Figured Out How To Make A Great Diesel Car Engine Just To Kill It Too Soon
- The Future Of The Auto Industry Is Electric, With A Gasoline Backup
- I’m Attending My First Ever Formula 1 Race And I Have No Idea What To Expect
“Still waiting to see the benefits of VoIP (which is the direct cause of the flood of robocalls and spoofed caller ID)…”
If you don’t see the benefits of VoIP, then it’s clear that you have exactly zero experience on the telecom side. I also have decades of IT/tech work experience. I have seen firsthand how VoIP is beneficial. I’ll give you a hint… it’s all about reducing network cost and making things easier to manage.
The move to VoIP had nothing to do with spoofed numbers and robocalls. And both of those technically existed even in the pre-VoIP days.
“Well, there goes any hope I had for Tom Donnelly being a good leader for Mazda. “
I agree with you on that. They might as well just make their business plan “thoughts and prayers”.
I might also mention: if you love paying long-distance and out-of-area charges, you’ll hate VoIP. If you like Erlang-number switch limitations, you’ll hate VoIP. If you love waiting months for adds, moves and changes, you’ll hate VoIP. The rest of us are sane.
How I miss the good old days of having to ration calls home to the UK from Japan to once a month, and justifying the cost to myself as a tax on having chosen to live abroad.
Bullshit. Small providers could spoof and there’s a lot of security issues running around on SS7. I was invited to join a security startup that found a bunch of them. No freakin thank you. And Microcom was better than MICA you n00b.
Maybe posting novels on somebody else’s website and getting into fights with strangers when they call you out on bs isn’t the right thing for your mood.
Regarding slowing down AI development… I fail to see how that would or could be enforceable.
Regarding Mazda… Personally I’m not too optimistic about the path they are taking. I think they’ll run into the same problems that VAG had when they stupidly decided that the VW brand would be ‘luxury’… but with less product/market overlap than VAG had.
And this business of them fucking around with a rotary generator… I predict it will be an expensive dead-end for them. And not having a proper/competitive BEV or BEV platform will kill them in the long run unless they get their ass in gear on that ASAP.
And I don’t see that happening.
And my take on AI as it relates to cars? I think it might become useful when it comes to self-driving. I think it could also be useful to make the system of traffic lights and other road infrastructure more intelligent.
Any time a three-letter acronym claims that the plan will increase sales by three to four times your previous long-term average, be wary. Be very wary.
So, um… you’re right because you’ve been in this a long time? Uh huh. So have I. I’ve run Baudot teletype over VHF for deity’s sake. While it might not meet your definition of AI (Do you have a definition for “self aware” that we haven’t already blown past? The Turing test isn’t working anymore.) I have a simple test: If it can recognize a picture of Sarah Connor, it’s ML. If it can recognize a picture of Sarah Connor, decide it’s a threat, start the process of trying to locate her, and vector kill bots, it’s AI. I don’t give a rat’s *ss if you don’t like how it works. We’re on the verge of seeing this, and smart folks are concerned.
Somehow AI doesn’t get taillights as well as a certain writer here does
Car tail lights are a critical component of any vehicle. They serve a variety of purposes, including indicating to other drivers that the vehicle is slowing down or stopping, warning drivers behind the vehicle of potential hazards, and improving visibility during low-light conditions. But beyond their functional purposes, tail lights have also become an important aspect of automotive design. Today, there are a wide variety of tail lights available, each with its unique style and functionality. In this article, we will take an in-depth look at the different varieties of car tail lights available today.
Halogen tail lights are the most common type of tail lights found on modern vehicles. They are cost-effective, easy to produce, and provide a good balance of brightness and energy efficiency. These tail lights are made up of a filament enclosed in a halogen gas-filled bulb. When the filament is heated, it produces light. Halogen tail lights are typically less bright than other types of tail lights, but they are more energy-efficient.
LED (Light Emitting Diode) tail lights are rapidly becoming the standard in automotive lighting. They are much brighter and more energy-efficient than halogen lights, and they last much longer. LED lights also offer more flexibility in design than halogen lights. They can be arranged in different shapes and sizes, allowing automakers to create more unique and distinctive tail light designs. LED tail lights are also available in a range of colors, making them a popular choice among car enthusiasts who want to customize the appearance of their vehicles.
OLED (Organic Light Emitting Diode) tail lights are a relatively new technology in the automotive industry. They use a layer of organic material that emits light when an electric current is applied. OLED tail lights are even more energy-efficient than LED lights, and they offer even more design flexibility. They can be arranged in very thin layers, allowing for much more intricate designs than are possible with other types of tail lights. OLED lights also provide more even illumination than other types of tail lights, making them a popular choice for high-end luxury vehicles.
Xenon tail lights use xenon gas to create a bright, white light. They are much brighter than halogen lights, but they are also more expensive. Xenon lights are typically found on high-end luxury vehicles, and they are often used in conjunction with LED lights for a more dramatic lighting effect.
Fiber optic tail lights use strands of glass or plastic fiber to transmit light from a central source to the tail lights. This allows for a more flexible design than other types of tail lights, as the fibers can be arranged in any shape or pattern. Fiber optic tail lights are typically found on high-end luxury vehicles, and they are often used in conjunction with LED or xenon lights for a more dramatic lighting effect.
Dynamic tail lights are a relatively new technology that allows the tail lights to change their appearance depending on the driving conditions. For example, the tail lights may increase in brightness when the driver brakes hard, or they may blink rapidly when the driver turns on the hazard lights. Dynamic tail lights are typically found on high-end luxury vehicles, and they are often used in conjunction with other types of tail lights for a more dynamic and engaging lighting effect.
Sequential tail lights are a type of dynamic tail light that uses a series of LEDs to create a scrolling effect when the driver turns on the turn signal. The LEDs light up in sequence, giving the appearance that the tail light is scrolling in the direction of the turn. Sequential tail lights were first introduced on the Ford Thunderbird in the 1960s, and they have since become a popular feature
Elon has the insight and mental stability of a weasel. And the credibility of a Trump Family member.
COTD
I think he invested early on — a few years ago, when OpenAI was entirely open. He had been vocal about the dangers of AGI for at least the last decade, and (since Congress didn’t act on the concerns) figured a non-corporate AGI might result in a benevolent one that could counter the closed systems being developed. (The narrow AI in FSD wasn’t intended as AGI, so not an existential risk.)
Everyone jumps at the latest tech revolution because they’re afraid it will end up being the next internet or Google, and if they don’t get in now they’ll get left behind. Sooner or later they’ll realize that it’s just not all it’s cracked up to be for one reason or another, and it’ll fade back to a more realistic scenario.
We saw this with voice assistants (Alexa, Siri, etc.), autonomous cars, and this AI hype is the next in line.
I dunno, writers, programmers, and artists are already feeling the pain of corporations trying to replace them with AI. Even engineers are under threat. That’s what makes this technology more concerning than the internet or Google: this time the creative jobs are under threat. Arguably some of the most aspirational jobs for a lot of people, things people love doing and were happy to make a living doing. If this continues, there will be fewer creative jobs and more manual labor jobs, and that doesn’t sound like the utopian future I hope for.
They’re trying to, but I don’t think it will last. With some exceptions, the quality of content being generated is not on par with what humans are producing. Much like the fact that autonomous cars can’t deal with every situation that gets thrown at them, I think that the limitations of AI generated content will become apparent sooner rather than later, and, like with those cars, the problems will be a tougher nut to crack than it currently appears. Making robots do one thing repeatedly is easy. Making them compensate properly for variable and unexpected input is hard. Really hard.
And much like robots that have already taken over manual, repetitive jobs, humans are still required to program them, maintain them, and compensate when they make mistakes. AI makes mistakes, too.
The problem is that we want quality content.
The people with the cash and the potential to do the hiring for said quality content? Yeah, I’m seeing very little of that actually get posted, and goodness knows I’ve been looking for months.
I’m convinced that many of them just don’t care. Not about the readers, and certainly not about the nuisance that is keeping anyone employed. They’ll just clog the page with more ads and sell on the publication to the next vulture capital sucker. I don’t stand a chance. I’ve started looking harder back in tech comms because dammit, at this point, I just want a parsh and enough leftover to take said parsh to the track.
Can confirm from SXSW rolling through town: “AI everything” is this year’s crypto/NFT grift.
I’m going to disagree here. I work in legal; document review, specifically. The current model is to have a team of lawyers literally read all the potentially relevant documents, then summarize perhaps millions of docs. It’s expensive, costing clients literally millions of dollars to produce the records as required to opposing counsel and to gain an understanding of what happened, when it happened and who was involved.
Guess what ChatGPT is really good at? Reading literally millions of documents and being able to answer questions about it. So instead of waiting a month for the review to finish so you can answer questions about whether something happened and who knew about it, you ask your robot and it points you to records you need to see.
In legal, secretaries went away when databases and word processing that lawyers could manage themselves appeared. Now? Paralegals are going to become defunct. The fleets of attorneys that used to do nothing but review documents for litigation? They’re not going to go away entirely, but the labor required will drop dramatically.
Generative AI that you can point at a collection of records is going to revolutionize litigation.
I didn’t mean to imply that it would be completely useless. All those things I mentioned have successful use cases for them, but they aren’t the panacea that they were made out to be when they came on the scene.
I’m rooting for Mazda. They’ve always marched to the beat of their own drum and they obviously gave us one of the greatest driver’s cars of all time. That being said, I do see their push upmarket as being quite a gamble. The CX-90 tops out around $70,000, which is a tough place to be playing ball.
The German and other Japanese competition is stiff there, and publications are continuously having to change their boxers over everything Genesis puts out (I obviously like Hyundai and Kia but even I find the slobber fest excessive). The lower spec trims of the CX-90 are also appealing, but at that price point they’re going up against the tour de force Telluride/Palisade twins, the new Pilot which is getting rave reviews, etc.
As I’ve said a few times, I do think there’s going to come a point when the SUV/crossover market reaches critical saturation and people eventually want something different. I get that SUVs are where the big bucks lie, but it still feels very here and now to me…plus the average price of a new car is already about to top $50,000. Corporate greed knows no bounds, but regular people just can’t afford new cars anymore, and that’s going to become a problem.
Maybe this is me being selfish, but I think Mazda needs another car if they intend to succeed in pushing upmarket. The Japanese and German luxury sedans still sell fairly well all things considered. I think if Mazda could find a way to get the new straight 6 into a sedan and have it start in the high 30s/low 40s they may be on to something…especially considering the other Japanese luxury sedans are more comfort than driving oriented.
All this being said, the CX90 commercials are some of the corniest shit I’ve seen in a long time. I think there are some language/cultural barriers there that they aren’t navigating very well. They definitely don’t make me want to buy the car…
Interesting, Musk sounds like one of the joke signatories. Capitalism does not often reward restraint or patience.
Musk’s take is not all that surprising. There’s quite a difference between computer-aided life quality improvements and full iRobot
Musk has been sounding the alarm about AI for years. I remember reading this article from Jalopnik back in 2017. https://jalopnik.com/even-elon-musk-thinks-the-robots-are-going-to-kill-us-a-1797242095
Let’s beat the dead horse one more time – the same people who don’t understand “a series of tubes” are going to regulate industries that are falling over themselves to stuff the political wallets with 100% totally legitimate not bribe money?
It’s going to be great for everyone.
Patrick, you perfectly demonstrate in this article why I think self-driving cars will never be a thing. How many people driving cars hit buses this week? Every time a Tesla on Autopilot is in a crash it makes news. When a teen smashes into another car head-on because they were texting, not so much. No doubt, AI in cars still has a way to go, but the idea that dumb-ass humans are categorically safer drivers than computers is ridiculous. Computers don’t drink and drive. Computers’ reflexes don’t slow as they age. Computers aren’t posting on social media when they should have their eyes on the road. If allowed to develop, I have no doubt that AI would make the roads safer than we are now. I also have no doubt that it will not happen in my lifetime because of human bias and the transfer of legal liability to corporations from individuals. Self-driving cars are this generation’s flying car, always only 10 years away.
I’m in my late 30s at this point, and I don’t think full self-driving or true 100% autonomy will happen in my lifetime. What we’ll get is better and better cruise control, essentially. And as a fan of human driving, for all its many flaws, I’m fine with that over handing my steering wheel over to a robot for good.
I don’t want to turn over my steering wheel either, but I think I am probably a better driver than most. However, I doubt my evaluation is objective. Perhaps dramatic improvement in AI assisted safety systems is the answer. It’s not that I don’t think humans have the potential to be safer than computers, we just don’t generally do it.
Side note: The new commenting system absolutely rocks! I think this will dramatically improve the sense of community here.
“Computers’ reflexes don’t slow as they age.” Yes they absolutely do… Old computers run slower than they did when new. Industrial machinery processors are swapped out after a certain life span because the parameters of their function become too broad, and they can hurt someone as well as stamp out an inferior product.
“Old computers run slower than they did when new.”
Computer hardware will run exactly the same regardless of age. Software, unless someone modifies the code and adds something, will also run the same regardless of age. If data is being written to nonvolatile memory, that causes fragmentation and slower reads, so maybe it will take longer to start up, but code running in RAM will run the same. I’ve run benchmarks on 10-plus-year-old hardware in the past and the results were the same as day one.
In your case it sounds like the software is changing, so the hardware needs to be upgraded to keep up with the additional complexity. Or the hardware is failing due to an unsuitable environment.
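For what it’s worth, the kind of test being described is easy to reproduce. Here’s a minimal sketch (the function name and workload are my own, not anything from the discussion above): a fixed CPU-bound task timed best-of-N, which you could run on day one and again years later on the same machine and software image to compare.

```python
import time
import hashlib

def benchmark(n_rounds: int = 5, payload: bytes = b"x" * 1_000_000) -> float:
    """Time a fixed CPU-bound workload (SHA-256 over 1 MB of data).

    Returns the best (lowest) wall-clock time across n_rounds, which
    filters out one-off OS scheduling noise.
    """
    best = float("inf")
    for _ in range(n_rounds):
        start = time.perf_counter()
        hashlib.sha256(payload).hexdigest()  # the workload being timed
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    # On unchanged hardware and software, this number should be
    # essentially identical whether the machine is new or a decade old.
    print(f"best of 5: {benchmark() * 1e3:.3f} ms")
```

The best-of-N approach matters: a single timing can be skewed by background processes, so comparing minimums is the fairer apples-to-apples test.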
He divested that years ago.
So, a self-driving car using AI would be able to find a fire engine hidden in the glaring light of a sunset so a collision is avoided? Right now, I’m using Bard by Google and when I asked, it gave me a discussion about whether Ginger or Mary Ann. It didn’t take sides. I think we’re a way off before AI helps us drive.
So far I’ve asked Bard to play tic-tac-toe against itself, whether it thinks a heist is a crime, and for its opinions on the three laws. o.O
So did you vote?
Ginger vs Mary Ann is the real Turing Test.
It’s all fine and good to recommend a moratorium on corporate development of AI, but the most dangerous use of AI is by DARPA and other government agencies for the purposes of war. How are we going to put a stop to and even audit THAT when we live under a government that lies about everything and does whatever it wants without public scrutiny or any concern for what the public’s wishes are?
On the subject of Mazda, it should have a lineup of PHEVs. A rotary-powered PHEV Miata with 30 miles of all-electric range would be interesting, and probably wouldn’t raise the weight more than 200 lbs.
I suspect Lucid and Canoo won’t be around in 2 years. Existing regulations are written in just such a way to keep startups from competing with the established industry. This is on purpose, as lobbyists working for the established corporations wrote the regulations that exist today.
I live in St. Louis. I think that lawsuit against Hyundai is entirely without merit. I also have a co-worker who lives in Seattle whose new Hyundai was a victim of this type of theft.
That deal went sour quite some time ago: https://news.yahoo.com/elon-musk-reportedly-furious-chatgpts-155348524.html
tl;dr—He wanted to lead the company, OpenAI wouldn’t let him, Elon left the picture, and lo and behold, OpenAI’s doing pretty well (as a corporate concern, that is—its damn chatbot still incorrectly suggests putting beans in chili).
They’ve been beefing ever since, so honestly, it makes sense that Elon would want to kneecap OpenAI from going further than its current top-of-the-line product. He’s a petty, petty man, and not even having a kid with an OpenAI exec seems to have tempered the pettiness.
I think this letter is more notable for who all else signed it, honestly. The signatures may be too easy to game, but the others who have less open beef with existing companies are a big deal.
Well, stopped clock and all that. Musk isn’t an idiot; he’s just erratic.
I don’t have much faith in AI as it sits. It’s a fun toy, but that’s it. I wouldn’t trust it with my life in a car unless I’m heavily monitoring how it’s driving and can take back over easily, without struggle (and even then, using cruise control still makes me drowsy), nor would I trust it for accurate information on the internet.
Case in point: I asked ChatGPT to tell me how to make chili. THIS is what it spat back:
Tomatoes are contentious enough, but beans? BEANS? Everyone on the planet who knows their brains from their butts knows that chili does not, will never, and should never contain beans. Ever.
Just straight-up misinformation. SMH.
It also routinely gets details about Puffalumps wrong, so it’s not just pulling from yankee sources who don’t know the difference between a bean soup and a chili. I guess these are the “hallucinations” they speak of that sound convincing enough, but aren’t actually correct. Also, anything creative it tries to pull off is just…bland. There’s no voice. I guess it pulled its inspiration from too many Generic Car Man articles that further bland up the press releases, given that there are a ton of those out there. Anyone putting their trust into AI over a human gets what they pay for, I guess.
Preach the chili, man. I’m with you!
I’m a Texan and I put all the beans in my chili. I use ground lamb instead of ground beef though and this recipe forgot the coriander and cayenne pepper. Also I assume “salt and pepper to taste” was supposed to be “shake some TexJoy over it”. Ah computers with no taste buds, what can ya do?
I assume this is a nice lamb-and-bean stew, but I’m sorry, the addition of beans makes it no longer chili.
Booooo!!!
Hank Hill would pity you.
The Chili Appreciation Society International specified in 1999 that, among other things, cooks are forbidden to include beans in the preparation of chili for official competition.
http://www.chili.org/rules.html
Next you’re going to tell me I shouldn’t be serving it over Doguet’s and topping it with cheese, sour cream, and HEB corn chips?
If it wanted you to put beans in your chili, it clearly wasn’t programmed by a Texan.
IT’S OUR DISH!!! At least pull that data from one of us, SMH.
For a minute, I thought this was Drew Magary’s Chili Recipe.
Without the beans, how are you gonna build up the gas in your gut to rip out award-winning farts?
Sheer talent. Beans are a crutch.
I see you having fun with your chili rant, but when about half the people disagree with your basic premise, it undercuts the point you’re trying to make. It does, however, make an incredibly important point about AI itself.
Recipes that refer to the prepared dish as “chili” often have beans. The fact that ChatGPT provided a bean chili even suggests that the bean version is the more popular version, because AI learned it from what’s popular on the internet. If you want to be pedantic, you could try insisting that the bean-containing versions must be called something else, but good luck with that battle.
Language changes. Chili is whatever most people understand it to be. The fact that you consider tomato as objectionable but still a possible part of the recipe is proof that playing defense to restrict the definition of a word takes immense energy, and is almost always a losing game.
And this is exactly what the human role will be in the future, pitted against AI. The overwhelming push of compiled data will certainly be too much to control in any meaningful way. While the AIs define the game, the pieces, the playing board, the rules and the objectives, we will be also there, too busy arguing about the color of the box to address any meaningful concerns.
I hope we always have a sane hand on an undefeatable kill switch, but doubt that the optimists in charge of developing it believe that is a necessary part of every AI project.
As long as the AI pronounces gif with a soft-G then it can put whatever it wants in its chili.
Lots of people getting it wrong (notably prominent, out-of-state publications that don’t know any better but do rank high on Google) are still just that: wrong. The NYT put peas in guacamole for some reason, for Pete’s sake.
This is *our* dish, here, in Texas, and the bean people—while popular at times—are incorrect. Bean-containing versions SHOULD be called something else, and I will die on this hill.
To me, it’s a perfect illustration of the shortcomings of “AI,” as it sits with these plagiarism-prone text/speech/image models. A human could’ve perhaps researched it further and noted the cultural nuance at play, or even taken note where the requesting party was from. A chatbot’s just gonna send it.
Just keep hitting the regenerate response until there are no beans.
Everyone around here knows the best way to eat chili is over spaghetti. With a mound of shredded cheddar cheese. Beans and onions are optional, but I gotta have a little mustard on it, too.
<calmly awaits the coming ragestorm>
I thought me being a Texan that admits I like beans in my chili was controversial, but wow, just wow.
the Bad Skyline is at it again
My insurance company, The Hartford, asked me if my Sonata was a key start. Nope. I don’t think they’re going to be insuring those for much longer.
I want an MX-30 badly. But yeah it would have to be with the RE. But who in their right mind doesn’t want a rotary?! I was all in for a ’23 Prius Prime, but I am now seriously looking at the Mazda instead.
When all the smartest people in the room are getting nervous and saying slow down, you should probably listen.
Of course, we won’t.
This is the moment when we need a strong, unified international body to deal with such things. The UN definitely ain’t that. Just look how well we’ve managed to regulate global emissions and virus research and gene editing; there’s nothing to be afraid of…
All the smartest people, plus Elon Musk. That’s where it all falls apart for me.
I can’t argue with this, but what are they going to do, tell him he can’t sign it?
Every time I see that moron’s face it just screams d-bag.
The only reason Mazda was actually able to “go premium” is pandemic-era gouging (“inflation”) and poor planning (“supply chain issues”).
CX-90 is ten times CX-9, so why don’t they want to sell 10x as much? It’s CX-90, not CX-36 😛
Bring back the Mazdaspeed 3. Bring back Mazdaspeed in general.
Give us a sedan with the new straight-six, you cowards
I have thought for a while that conflating the term “AI” with an advanced language or image model is pretty disingenuous, especially considering what most people have been culturally conditioned to believe AI is.
It’s undoubtedly cool that software can use the entire internet as its inputs to write books or poems or create art in milliseconds, but it’s a pretty big leap from that capability to self-driving cars.
Same, ChatGPT is advanced predictive text but not quite full AI. Parroting the entire internet is impressive but we’re not quite at Skynet…yet.
It’s not much of a leap at all to give it decision-making abilities and then to program in control of certain systems based on what it comes up with.
I’m not necessarily thinking of self-driving cars. People are always trying to make money – I’m sure there are already plenty trying to figure out how to use AIs to analyze markets to somehow take advantage and profit. That greed will lead to a desire for speed, which will lead to giving control over to these systems to make rapid financial decisions based on gathered data and internal calculations.
There are so many ways this could go so wrong.
Greed, malice, anarchy – there are plenty of reasons people will give these systems way more control than they should ever be given.
Turning over decision-making responsibilities to a machine without first considering every possible action the machine could take will always be dangerous, and in that respect I agree completely with this article. I think the issue v10omous and I are having is with referring to ChatGPT as AI.
Responding quasi-naturally in chat doesn’t make it AI. When it attempts to speak authoritatively on a subject, experts in that field are pretty quick to point out the flaws, so it doesn’t pass the Turing test.
Having the ability to make decisions doesn’t make it AI either. The cruise control on your car (old dumb cruise, not modern adaptive cruise) makes decisions regarding throttle position using a PID control loop and it does it quite well despite a total and complete lack of AI. Adaptive Cruise is fairly dumb as well if it’s just told to maintain a set distance from the object in front of it. True level 4 or 5 self driving might come closer to AI but even then I suspect it will mostly be relying on feedback loops.
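The PID loop described above is simple enough to sketch in a few lines. This is a toy illustration, not code from any real cruise control system: the gains, the 3 mph/s throttle authority, and the drag model are all made-up assumptions chosen just to show a feedback loop holding a set speed with zero "intelligence."

```python
# Minimal sketch of a dumb-cruise-style PID loop. All gains and the
# toy vehicle model are illustrative assumptions, not real ECU values.

def pid_controller(kp, ki, kd, integral_limit=10.0):
    """Return a stateful step(error, dt) -> control output function."""
    integral = 0.0
    prev_error = None

    def step(error, dt):
        nonlocal integral, prev_error
        # Accumulate the integral term, clamped to limit windup.
        integral = max(-integral_limit,
                       min(integral + error * dt, integral_limit))
        derivative = 0.0 if prev_error is None else (error - prev_error) / dt
        prev_error = error
        return kp * error + ki * integral + kd * derivative

    return step

def simulate_cruise(target=65.0, speed=55.0, seconds=60.0, dt=0.1):
    """Toy vehicle: full throttle adds 3 mph/s, drag bleeds off speed."""
    controller = pid_controller(kp=0.5, ki=0.05, kd=0.1)
    for _ in range(int(seconds / dt)):
        throttle = min(max(controller(target - speed, dt), 0.0), 1.0)
        speed += (3.0 * throttle - 0.02 * speed) * dt
    return speed

print(round(simulate_cruise(), 1))  # settles near the 65 mph set point
```

The controller only ever sees the speed error; it has no model of the road, traffic, or the car itself, which is the point being made: effective decision-making via a feedback loop, with no AI anywhere.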
This isn’t new though, folks have been throwing the “AI” term around for the past decade without any idea what it means.
Isn’t intelligence itself simply a feedback loop, and the degree of intelligence merely a measure of how many inputs are correctly accounted for?
That’s a question I’ve been asking myself lately.
I think you can consider knowledge vs. intelligence.
LLMs are accumulated human knowledge, from whatever sources are fed into it. One that feeds only on Wikipedia will have a different knowledge than one that is fed only VWvortex. “General knowledge” can often be dead wrong.
Intelligence implies some kind of discernment. Can a LLM discern what is true, even if falsehoods are repeated more often in its sampling?
You may have run across this recently, as I have. Some percentage of people thought that the last time we changed the clocks here in the States would be the last time for that.
There was a lot of talk about it in past months, because the Sunshine Protection Act passed the Senate on a technicality, but it quickly stalled in the House. No real change happened, but a good number of people were under the impression it had. They may have heard someone repeat this mistake and taken it at face value, but it’s easy for anyone with intelligence to seek out an authoritative source and determine the truth.
You might say an LLM could be told what the authoritative sources are, but who gets to decide that?
Oh man, what is intelligence? Is it self-awareness? Is it self-preservation? Who knows? You start asking these questions and next thing you know you’re being forced to take your friend’s arm off then whispering apologies in their ears while activating their off switch. That’s not a path anyone wants to take.
So I guess it’s just a tumor?
I’ve started to habitually replace the letters “AI” with “ML” when I read any articles about tech because that’s almost always what they actually mean, but AI has more cultural weight so that’s what the marketers are pushing.