In my experience, the congestion data is not the issue: even with the split across Google / TomTom / Here / Apple / some hyperlocal alternatives, everyone seems to have a reasonably good idea where the traffic jams are. Having up-to-date POIs is a different can of worms, solved only by Google, and not through some clever algorithm but through sheer brand recognition. They're the only ones that have this data fed to them by POI owners.
Idea: an open website/app that updates POI information on every known platform simultaneously. To POI owners it would be a win because their info reaches more people with little extra work. The hard part is spreading the word, but I don't see why an open project's donations can't go towards marketing. The marketing material can also promote existing open platforms like OSM and some of the open review websites (mangrove.reviews and lib.reviews, for example), such as sending POI owners free window stickers that say "Review us on **!".
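The core of that idea is just a fan-out: accept one canonical update from the owner and push it to every platform through per-platform adapters. A minimal sketch, where the platform adapters are hypothetical stand-ins (real ones would wrap the actual APIs, e.g. OSM changesets or Google's business listing API):

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class POIUpdate:
    name: str
    address: str
    opening_hours: str

# Each "platform" is modeled as a callable that accepts an update and
# returns True on success. Names and signatures here are illustrative,
# not any platform's real API.
def fan_out(update: POIUpdate,
            platforms: Dict[str, Callable[[POIUpdate], bool]]) -> Dict[str, bool]:
    """Push one canonical update to every registered platform and
    collect per-platform results."""
    results = {}
    for name, push in platforms.items():
        try:
            results[name] = push(update)
        except Exception:
            # One failing platform shouldn't block the rest;
            # failed pushes would be retried later.
            results[name] = False
    return results
```

The owner edits their info once; the project's value is entirely in maintaining the adapter list and the retry/verification plumbing around it.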
I commute using one of several routes, and there's only a coarse relationship between the time of day and congestion on each route; there's no certainty about the actual congestion. A single broken-down vehicle on a single-lane road can cause backpressure at key points and make non-obvious alternatives MUCH better.
As a result I always drive/ride with Waze (I know!) and I'd love an alternative. Google Maps is too slow.
Well, Google has the most popular OS. TomTom is not far from being the OS of many cars. So is Here, which is owned by Daimler/BMW/Audi (sold for 3 billion). Apple has the second most used mobile OS.
So yes, for these huge actors, it's quite easy to create congestion data.
The vast majority of POI churn information comes from Street View + machine learning object detection + automatic change detection + human verification. There are many clever algorithms in play throughout the entire pipeline. As moats go, it's probably bigger than search.
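Conceptually, the last automated stage of that pipeline reduces to a diff: once detection and OCR have turned imagery into storefront names keyed by location, change detection is comparing two passes and queuing the differences for human review. A toy sketch of just that diff step (the location-ID keying and the upstream ML are assumed, not shown):

```python
from typing import Dict, List

def detect_poi_churn(previous: Dict[str, str],
                     current: Dict[str, str]) -> Dict[str, List[str]]:
    """Compare recognized storefront names (keyed by a stable location ID)
    between two imagery passes and bucket the differences for human
    verification."""
    prev_ids, curr_ids = set(previous), set(current)
    return {
        # Present before, gone now: candidate closures.
        "closed": sorted(prev_ids - curr_ids),
        # New in the latest pass: candidate openings.
        "opened": sorted(curr_ids - prev_ids),
        # Same location, different name: candidate rebrands.
        "renamed": sorted(k for k in prev_ids & curr_ids
                          if previous[k] != current[k]),
    }
```

The hard (and moat-like) part is everything upstream of this function: driving the streets often enough, and making the detection reliable enough that human verifiers only see plausible candidates.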
You forgot Google's massive user base. Users, through comments, are the first to tell Google that a POI changed. And shop owners are often the first users to care.
I believe they'll get profitable sooner than their frontier competition. Their operating costs seem to be peanuts compared to the providers they're most often compared to, while they have the local advantage of being neither Chinese nor American.
How do you feel about the responsiveness of gemini-cli? I tried it on a paid plan and the 10-minute hang-ups (per step, not the whole plan execution) really break the illusion of performance gains, unless you run it in the background and do something else in the meantime. It's more noticeable when Americans are awake.
A couple of years ago I was 20 (my age, not my career span). Save for the usual StackOverflow / blog snippets, that was my experience, and I suppose that of most people just starting out. I think it's very recent to have fresh grads who barely type code themselves.
Gemini CLI is absolutely terrible, not comparable to the browser access. I've started using the 'AI Pro' tier lately and I regularly get 15-minute response times from Gemini 3 'Flash'.
Please don’t strawman me, I asked a completely different question.
It’s not about being grateful or anything, but many people (devs) are too concerned about their code being stolen, as if they’ve come up with something unique and the LLMs are some kind of database (which they aren’t).
At the end of the day we’re going to be using AI to write all the code; many of us are already doing that. And if some GitHub Copilot model turns out better, we get higher-quality code that becomes generally available for the next pretraining runs (for yours and other models). Some would even switch to Copilot if it’s good.
If something is mine by right, then no matter how little or how much it is worth, no one should be allowed to force or trick me into donating it. It should be my choice alone.
Yeah, but I’m just really curious why so many people are so against improving the models. It’s not like someone is stealing the code you’ve written; it’s more like generally expanding what the LLMs can generate, is it not?
They've had ample access to the final output, our code, but they still hope that with enough data on HOW we work they can close the agentic gap and finally get those stinky, lazy humans that demand a salary out of the loop.
Quite simply, that's just a matter of corporate internal policy and its (lack of) enforcement. This problem is a subset of the wider IP-breach issue, with some people happily feeding their work documents into the free tier of ChatGPT.