I looked at it and it is impressively lightweight. It would help if it could collapse duplicate notifications; right now the notifications page is filled with repeats, even though I'm not all that popular on fedi.
If the navigation simulates what would happen if we followed links to SPA#pos1, SPA#pos2, etc., so that after two clicks within the SPA, hitting Back three times returns me to whatever link I followed to get to the SPA, I guess it's OK and follows user expectations. But if it's used as an excuse to trap the user in the SPA unless they kill the tab, that's not OK.
> From the browser's perspective those are the same thing though.
If the browser only allows adding at most one history item per click, I should be able to go back to where I entered a given site with at most that many back button clicks.
At first glance, this doesn't seem crazy hard to implement? I'm probably missing some edge cases, though.
Some browser APIs (such as playing video) are locked behind a user interaction. Do the same for the history API: make it so you can't add any items to history until the user clicks a link, and then you can only add one.
That's not perfect, and it could still be abused, but it might prevent the most common abuses.
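To make the proposal concrete, here is a minimal sketch of the gating rule described above: history entries can only be added after a user interaction, and at most one entry per interaction. This is a hypothetical model, not a real browser API; the class name and methods are invented for illustration.

```javascript
// Hypothetical sketch of the proposed rule: gate history.pushState
// behind a user gesture, the way autoplay is gated, and allow at
// most one new entry per gesture.
class GatedHistory {
  constructor() {
    this.stack = ["entry-page"]; // where the user came from
    this.gestureBudget = 0;      // entries allowed since last click
  }
  userClick() {
    // A real user interaction grants permission for exactly one entry.
    this.gestureBudget = 1;
  }
  pushState(url) {
    if (this.gestureBudget < 1) return false; // silently rejected
    this.gestureBudget -= 1;
    this.stack.push(url);
    return true;
  }
  back() {
    if (this.stack.length > 1) this.stack.pop();
    return this.stack[this.stack.length - 1];
  }
}

// With this rule, N clicks inside an SPA mean at most N Back presses
// to return to wherever the user entered the site.
const h = new GatedHistory();
h.userClick();
h.pushState("#pos1");        // allowed: one entry for one click
h.pushState("#spam");        // rejected: no gesture budget left
h.userClick();
h.pushState("#pos2");        // allowed again after a second click
console.log(h.stack.length); // 3: entry page + two real navigations
```

A site could still burn one entry per legitimate click, but it could no longer flood the back stack with dozens of entries from a single interaction, which is the abuse pattern being complained about.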
Clearview again. ICE is using it too, and their people treat it as an oracle that is always correct, so when someone shows a passport card or a REAL ID proving they are someone else entirely, a US citizen or permanent resident, they are usually accused of having a fake ID. It's a flawed tool and it misidentifies people sometimes.
I understand why OpenAI is trying to reduce its costs, but it simply isn't true that AI crawlers aren't creating very significant load, especially those crawlers that ignore robots.txt and hide their identities. This is direct financial damage and it's particularly hard on nonprofit sites that have been around a long time.
> but it simply isn't true that AI crawlers aren't creating very significant load.
And how much of this is users who are tired of walled gardens and enshittification? We murdered RSS, APIs, and the "open web" in the name of profit and lock-in.
There is a path where "AI" turns into an ouroboros, tech eating itself, before being scaled down to run on end user devices.
These are ChatGPT and Claude Desktop crawlers we’re talking about? Or what is it exactly? Are these really creating significant load while not honoring robots.txt?
Is this the first time you are reading HN? Every day there are posts from people describing how AI crawlers are hammering their sites, with no end in sight. Filtering user agents doesn't work because they spoof them, filtering IPs doesn't work because they use residential IPs. Robots.txt is a summer child's dream.
They seem to mostly be third-party upstarts with too much money to burn, willing to do what it takes to get data, probably in hopes of later selling it to big labs. Maaaybe Chinese AI labs too, I wouldn't put it past them.
And doing it over, and over, and over and over again. Because sure, it didn't change in the last 8 years, but maybe it's changed since yesterday's scrape?
What does "verified" mean here? You can verify that a real company posted that job, but you can't verify whether it is fake in the sense that they already have an H-1B candidate they want for the position and are just advertising it to meet legal requirements.
A friend of mine mentioned that someone made a site to find those hidden jobs, so people desperate for work who are already in the US can widen their own net. Not really sure how effective they are at it.
Conservapedia had to have a person create each article and didn't have the labor or interest. Grok can spew out any number of pages on any subject, and those topics that aren't ideologically important to Musk will just be the usual LLM verbiage that might be right or might not.
People were literally buying horse dewormer when their doctors wouldn't prescribe it for them. "Influencers" were selling it. So the media were being accurate. To the extent that this made people look dumb, the intent was mostly to shame them into trying something more effective.
You dropped my words "into trying something more effective". Steering people into treatments found to be more effective is, precisely, giving people the best available information. Ivermectin is great if you have a parasitic infection. It doesn't help against viral infections.
This is going to be a huge problem for conferences. While journals have a longer time to get things right, as a conference reviewer (for IEEE conferences) I was often asked to review 20+ papers in a short time to determine who gets a full paper, who gets to present just a poster, etc. There was normally a second round, but often these would just look at submissions near the cutoff margin in the rankings. Obvious slop can be quickly rejected, but it will be easier to sneak things in.
AI conferences are already fucked. Students who are doing their Master's degrees are reviewing those top-tier papers, since there are just too many submissions for existing reviewers.
I think on HN, people waste too much time arguing about the phrasing of the headline, whether it is clickbait, etc. and not enough discussing the actual substance of the article.
You're right, mostly, but the fact remains that the behavior we see is produced by training, and the training is driven by companies run by execs who like this kind of sycophancy. So it's certainly a factor. Humans are producing them, humans are deciding when the new model is good enough for release.
Do the lies look really good in a demo when you're pitching it to investors? Are they obscure enough that they aren't going to stand out? If so, no problem.
In practice, yes, though they wouldn't think of it that way because that's the kind of people they surround themselves with, so it's what they think human interaction is actually like.
"I want a chat bot that's just as reliable as Steve! Sure, he doesn't get it right all the time and he cost us the Black+Decker contract, but he's so confident!"
You're right! This is exactly what an executive wants to base the future of their business off of!
You use unfalsifiable logic. And you seem to argue that, given the choice, CEOs would prefer not to maximize revenue in favor of... what, affection for an imaginary intern?
You are declaring your imagined logic as fact. Since I do not agree with the basis on which you pin your argument, there is no further point in discussion.
Given the matrix 'competent/incompetent' / 'sycophant/critic' I would not take it as read that the 'incompetent/sycophant' quadrant would have no adherents, and I would not be surprised if it was the dominant one.
People with immense wealth, connections, influence, and power demonstrably struggle to not surround themselves with people who only say what the powerful person already wants to hear regardless of reality.
Putin didn't think Russia could take Ukraine in 3 days with literal celebration by the populace because he only works with honest folks for example.
Rich people get disconnected from reality because people who insist on speaking truth and reality around them tend to stop getting invited to the influence peddling sessions.
They may say they don't want to be lied to, but the incentives they put in place often inevitably result in them being surrounded by lying yes-men. We've all worked for someone where we were warned to never give them bad news, or you're done for. So everyone just lies to them and tells them everything is on track. The Emperor's New Clothes[1].