Nah, if it were an AI computing company, FSD would be completed in, ahh, 3 months. 6 months definitely.
I've been following all the large tech AI developments. So far Apple is the only one of the big tech companies that hasn't put an LLM out yet. Even Meta has a good LLM out now. Elon isn't the type of personality to not show off a large neural net if they had one. He showed off his bicep curls in a short video on Twitter the other month. lol If Tesla had a large neural net he would have made predictions of AGI or ASI (artificial superintelligence) by 2028 or whatever.
Maybe he is, in reality, really smart and is holding back info until he can make friends with his favorite autocratic dictator in his quest to reign supreme over us all.
When will you be revising your price target to $0?
Don't think Mercedes Lvl 3 will be the crack that breaks Tesla. If you read the announcement, there are some caveats:
The company has confirmed that its Level 3 hands-free driving service, called Drive Pilot, will be offered as a $2,500 annual subscription for 2024 EQS sedan and 2024 S-Class models through dealers in Nevada and California. Those states approved it in January and June, respectively.
Using Drive Pilot takes two steps. First, the car must determine when the correct operating conditions are met: The vehicle must be on a highway with a solid center divider and at least two lanes of traffic on either side, and the surface must be dry during daylight.
The radars and cameras can't read the road in rain, snow, ice or darkness.
Still, the system is restrictive. The 40 mph limit and road restrictions mean it will work only in a certain type of traffic, familiar to Los Angeles drivers but potentially less useful to those elsewhere. It won't change lanes by itself, and it won't operate on surface streets, which are more complex and volatile than highways.
(That will come in future years, according to Mercedes engineers.)
When I recline my seat a bit (I'm getting comfortable), a warning appears on the driving display to alert me that settling into a nap isn't permitted. Fair enough. When I unclip my seatbelt, a further warning tells me to restore it. And when I muse about driving it to Mexico, the Mercedes engineer riding shotgun says Level 3 would deactivate once I cross beyond the California state border. (The same applies to Nevada.)
It's a nice step forward, but I think the number of sales will be small and its utility seems limited. I'm not ready to revise my price target to $0 based on this crack.
Quote:
Originally Posted by Jupiter0
I've been following all the large tech AI developments. So far Apple is the only one of the big tech companies that hasn't put an LLM out yet. Even Meta has a good LLM out now. Elon isn't the type of personality to not show off a large neural net if they had one. He showed off his bicep curls in a short video on Twitter the other month. lol If Tesla had a large neural net he would have made predictions of AGI or ASI (artificial superintelligence) by 2028 or whatever.
Tesla has extremely large neural networks running offline in the autolabel pipeline. They also have a rather large neural network running online in the cars. V12 goes from many smaller networks to one end2end network, so it will likely be even larger; details are unknown. A larger neural network tends to perform better, but size is not a goal in itself, since it comes with a runtime cost.
The recent Optimus demo showcased a single end2end neural network, running from video in to robot control out, that can be prompted to do different tasks. In the video they show sorting blocks, unsorting blocks, a yoga demo, sensor calibration, etc., all running the same end2end neural network. My guess is that the prompting will be run with an LLM.
Tesla has extremely large neural networks running offline in the autolabel pipeline. They also have a rather large neural network running online in the cars.
Yeah, I figure they have some kind of GPU cluster to do the car vision, but how big is it really? It seems like it would be a much smaller, different type of model than the LLMs alluded to, not a natural language processing LLM. That's why Elon talked on X about starting up an LLM for X Corp and buying H100s. Meta has like 16k GPUs and OpenAI must now have that or more. Sam Altman said GPU scarcity prevented full multimodal earlier this year. They are now rolling out multimodal, so they probably have way more now.
The Tesla humanoid robots remind me a lot of Sanctuary robots.
Yeah, I figure they have some kind of GPU cluster to do the car vision, but how big is it really? It seems like it would be a much smaller, different type of model than the LLMs alluded to, not a natural language processing LLM. That's why Elon talked on X about starting up an LLM for X Corp and buying H100s.
The Tesla humanoid robots remind me a lot of Sanctuary robots.
Dojo v1 was optimized for training and running neural networks with video input. Dojo v2 has been upgraded to perform better on LLMs. Tesla recently got an additional 10k H100 GPUs, worth about $300M, on top of their previous A100 and Dojo clusters.
At Autonomy Day 2022 they said they had 14k GPUs: 4k used for autolabel and 10k to train their inference neural networks.
X.ai will team up with Tesla and share some resources and results. There are many large open-source datasets that Tesla can use, and maybe Twitter will be another interesting source of data to consider adding. I think llama2 or equivalent has a sufficient understanding of language to tell the difference between "sort the blocks" and "unsort the blocks", so I don't think the X dataset will be the deciding factor.
Regarding sanctuary.ai:
If you read the description it says: #RobotsDoingStuff #PhoenixRobot #Humanoid #GeneralPurposeRobot #TeleOperation.
There is a bit of a gap between running teleoperation and running an end2end neural network. Hopefully they can use that data to quickly move towards neural network control, but it's a large and difficult problem to go from a demo to large-scale production at a profit.
It's nice that there will be some competition coming for Optimus; it brings some alpha to the stock.
Are you sure those robots in the Tesla vid aren't remote controlled too? Seems hard to verify. I'm actually really optimistic on self-driving from some company with Google and now OpenAI going image multi-modal.
Are you sure those robots in the Tesla vid aren't remote controlled too? Seems hard to verify. I'm actually really optimistic on self-driving from some company with Google and now OpenAI going image multi-modal.
Not 100% sure, but I find it very unlikely. They have many engineers who would be very uncomfortable with lying, and it might be illegal. There are many projects doing similar things, so it's not so unlikely that any given company would be able to do it. Btw, I liked the book Superforecasting for how to judge these kinds of things.
They were teleoperated in the previous video, and there they also showed the first time the robot executed a task autonomously:
Don't think going multimodal will help them solve self-driving. Maybe it will help them actually read complex signs, but getting that to work reliably will be a pretty huge step.
I've been following all the large tech AI developments. So far Apple is the only one of the big tech companies that hasn't put an LLM out yet. Even Meta has a good LLM out now. Elon isn't the type of personality to not show off a large neural net if they had one. He showed off his bicep curls in a short video on Twitter the other month. lol If Tesla had a large neural net he would have made predictions of AGI or ASI (artificial superintelligence) by 2028 or whatever.
The powerful AI model isn't out yet for Apple. Eventually you will be able to show Siri a picture of anything and it will help you solve problems with it. OpenAI just enabled that with GPT4. Siri is the old model.
The powerful AI model isn't out yet for Apple. Eventually you will be able to show Siri a picture of anything and it will help you solve problems with it. OpenAI just enabled that with GPT4. Siri is the old model.
Okay, I see. You're expecting the Ajax project to be launched standalone as a competitor to Siri? Apple has used machine learning, or 'AI', for many years in a variety of applications, including their voice assistant.
Is it just me or are all the robots in the Tesla videos 100% CGI?
I've watched both videos and I'm like 99.9% sure both videos are CGI renders.
It's a nice compliment to them that people believe their videos are CGI. It says 1.5x at the beginning, but most people don't notice the text, which makes it look a bit uncanny.
I think your 99.9% is way too high. Give it a few more months and you might be able to see it yourself from the audience; maybe then you can feed your prediction back and update your world model.
Well, we now all know (from internal documents) that their 2019 FSD video was basically faked. And, as we all know, FSD has been going nowhere the last few years (and will never go anywhere with the current setup).
So I'm totally sure they would not do that again and totally make up some fake future robot business to keep the stock price up while the car business struggles.
Had my faked Tesla videos mixed up; this one was 2016.
Can you quote exactly what was faked? From the article it says:
“The intent of the video was not to accurately portray what was available for customers in 2016. It was to portray what was possible to build into the system,” Elluswamy said, according to a transcript of his testimony seen by Reuters.
I don't think any sane viewer got a different impression during Autonomy Day.
The video carries a tagline saying: “The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.”
Quote:
When Tesla released the video, Musk tweeted, “Tesla drives itself (no human input at all) thru urban streets to highway to streets, then finds a parking spot.”
TSLA has been a great stock. I lost faith in anything coming out of the company years ago. In 2008ish they announced plans for the solar-powered Tesla Roadster to launch in a year or two. 15 years later it still doesn't exist. They sure know how to market and fake it. Maybe they'll reach Level 3 driving autonomy by 2038?
How does one take the TeslaBot video seriously? The capabilities were sad... miles behind Boston Dynamics and clearly far off from having them roam factory floors.
Luckily Elon's an absolute genius, so within a day or two of him tweaking some code these bots will be industry leading. I predict a tripling of market cap to $2.5T.
The video carries a tagline saying: “The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.”
So what's the fake? The car was driving itself; the driver has to be there for legal reasons. Was he just there for legal reasons? Obviously not, he was ready to take over if something went wrong, which it did many times while they were developing it. Later they gave many test drives to people in the audience doing the exact same thing, so clearly the failure rate was not that high.
The video was not faked; Elon just said some slightly misleading words about it. It happens when you improvise that you say things you don't mean exactly as stated. The goal was aspirational: to eventually not need a driver at all.
When Tesla released the video, Musk tweeted, “Tesla drives itself (no human input at all) thru urban streets to highway to streets, then finds a parking spot.”
In the video it did this. Can it do it today? V12 clearly can. Could it do it then? Yeah, from one hardcoded parking lot to another hardcoded parking lot that they had overfit their training towards, and they even repainted some lines themselves to fix issues lol.
Again what exactly is the fake? Say it with your own words.
Yeah from one hardcoded parking lot to another hardcoded parking lot that they had overfit their training towards and even repainted some lines by themselves to fix issues lol.
How about they put this in the tagline of the video instead of misrepresenting and hyping FSD capabilities?
The car still can't do the stuff in the video consistently, without a safety driver, today.
How about they put this in the tagline of the video instead of misrepresenting and hyping FSD capabilities?
The car still can't do the stuff in the video consistently, without a safety driver, today.
7 years later.
Tagline? It was part of a 3h presentation targeted at recruiting engineers, where they went into detail about the hardware and software and gave the audience plenty of time to ask questions directly. That the media takes it out of context, adds spin, and claims it is fake is not their problem. What does fake even mean? CGI?
Do you have any source for the claim that they cannot do the stuff in the video consistently? I would guess they have solved that drive pretty well by now. And what does "solve" even mean? Have humans solved driving?
But yeah, clearly they still get interventions with the public version and with V12.
It turned out that solving FSD was a lot harder than Elon, Google, and all the other experts thought. I remember listening IRL to Chris Urmson some 15 years ago; he hoped his kid would never have to learn to drive. But as they keep adding orders of magnitude more compute, data, and refinement, performance keeps going up, and at some point they will get there.
Anyway, thank you TeflonDawg for reminding me to take a break from this cesspool of a thread, with bears more interested in winning the debate than in understanding the stock.