Wasn’t the only thing OpenAI did throwing a half-baked model out for the public to go ham on? I was at Google when they did this, and we already had working LLMs internally; they just weren’t good enough to release without PR backlash. I don’t see why such a slim “advantage” should have led to anything other than a moment in the spotlight. The “we have no moat, and neither does OpenAI” essay was published very shortly afterwards.
If anything, you ought to expect them to be behind, since they took the position of making all the mistakes first so others (who already had the same or better tech) didn’t have to.
> throwing a half-baked model out for the public to go ham on?
I think that’s underselling their contribution, which was mainly demonstrating that it’s possible, and showing what it looks like as a product. Until then, nobody had figured out how to shape it into a product, and ChatGPT showed how to do that. Don’t forget that for a year or two they kept making headlines with DALL·E and whatnot.
To me, what happened after that is where the lack of focus started to hurt them: they realized that the models themselves would become a commodity with no moat, and that they needed to build a network or something similar to keep pulling people back in. Sora was one such attempt, and it failed hard.
Enterprise / B2B seems like a much easier, more obvious market to approach, though I don’t know a lot about B2C. But B2C seems to be what OpenAI was going after.
> Last Tuesday, we shipped a new feature at 10 AM, A/B tested it by noon, and killed it by 3 PM because the data said no.
I've worked with Amazon's Weblab, and even Amazon doesn't have the kind of traffic you'd need to reach statistically significant conclusions on that time frame (with a few very obvious exceptions, like the whole site being down or unusable). People need to calm down with the "hustle porn", unless they don't mind their credibility being in the toilet.
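To put rough numbers on the traffic question: here's a back-of-the-envelope power calculation using the standard two-proportion normal approximation. The 10% baseline conversion rate and 2% relative lift are made-up example figures, not anything from Weblab.

```python
from math import ceil
from statistics import NormalDist

def samples_per_arm(p_base: float, p_new: float,
                    alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-arm sample size for a two-proportion z-test
    (normal approximation, two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)            # critical value for power
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_base - p_new) ** 2)

# Detecting a 2% relative lift on a 10% baseline conversion rate
# needs on the order of hundreds of thousands of users per arm:
n = samples_per_arm(0.10, 0.102)
print(n)
```

Unless the effect is enormous (like the site being down), accumulating that many conversions per arm in five hours is out of reach for almost everyone.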
With LLMs (and colleagues) it might be a legitimate problem since they would load that eval into context and maybe decide it’s an acceptable paradigm in your codebase.
At least two of YC's early (mid-aughts) "huge" successes come down to PG unilaterally (or with some help from JL) making some kind of "weird" call: Airbnb and Reddit come to mind. Even Stripe can be traced to him, since he basically assembled the Auctomatic team (Patrick Collison's previous YC entry).
In other words, PG had the "knack" for sometimes encouraging the right weird thing. I'm not sure it's been the same since he handed off the reins, like any other formerly-founder-led company. Nowadays it really gives off the vibe of bean-counting and hype-chasing.
I don't think it's gotten quite as bad as this [0] article suggests, though.
Unintelligent people can also be right, or lucky, and someone judging on those criteria can end up getting swept up in making some very bad decisions based on dubious advice.
One of the most important lessons I ever learned in my career was not to mindlessly disregard a known bullshitter. He'll be right often enough that you'll look foolish, even if he hasn't earned his reputation.
This likely takes 500 MB+ of RAM. TFA probably didn't account for the tauri://localhost process in their calculation, which by itself takes 200 MB+. Then your app process will take 100 MB+, and there will be a couple of other processes besides.
Tauri is no better than Electron in terms of RAM, just like people calling it "lightweight" are no better than flat-earthers. Let's hope they come around.
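For anyone who wants to check this themselves rather than take either side's word for it, here's a minimal Linux-only sketch that sums resident set size (RSS) across every process whose name contains a given string. Note that RSS double-counts memory shared between processes, so treat the total as an upper bound; the process name you pass in is whatever your app's binaries are actually called.

```python
import os

def rss_mb(pid: int) -> float:
    """Resident set size of one process in MB, read from /proc (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024  # value is reported in kB
    return 0.0

def app_rss_mb(name: str) -> float:
    """Sum RSS over all processes whose command name contains `name`."""
    total = 0.0
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                if name in f.read():
                    total += rss_mb(int(pid))
        except OSError:
            pass  # process exited between listing and reading
    return total
```

Running `app_rss_mb("my-tauri-app")` (and the equivalent for the WebView helper processes) while the app is open gives you the real multi-process footprint, rather than just the main process that task managers show by default.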