> The real 'next big thing' would be integrating an engine like Gemini with OS-level hooks (similar to the OpenClaw approach) so agents can manipulate app windows and state directly. Resurrecting Web Intents as 2-way App Intents would be the key to making this work.
I think for something like this, it will only work if you can allow your local files to get messed up by the LLM but then, because everything has been synced to the cloud, there's a safe "revert" option.
I'd love that built on a Linux foundation too, but realistically I reckon that if they're going down that path, they've already got the core of "all your app state can be backed-up/transferred" in Android, so they'd likely lean heavily on that.
Goes away, or is liable for the content promoted to the frontpage under the OP's take?
But I'd agree that it's personalisation rather than just curation that's the issue.
I think even requiring sites to have a "bring your own algo" version (and where ads are targeted to the algorithm, rather than the person) would cure a lot of ills.
As it is, even with something like Spotify where you _are_ paying, there's no easy way to "reset" your profile to neutral recommendations.
> Goes away, or is liable for the content promoted to the frontpage under the OP's take?
Same thing. There is no Hacker News if Y Combinator becomes liable for user submitted content.
It’s an obvious backdoor play to make sites go away. If a site becomes liable for content posted, you cannot allow users to post content without having the site review and take responsibility for every comment and every post.
The people proposing it haven’t considered how damaging that would be for the ability of individuals to share ideas and their content. When every site with “an algorithm” is liable for content posted, nobody is going to allow you to post something. It’s back to only reading content produced and curated by companies for us. Total own-goal for the individual internet user.
I think you could finesse it by saying that on HN, the users submit the content and the users also determine (by voting) what is popular. Y Combinator doesn't promote or bury any particular post with their own algorithms; they don't exercise any editorial review or control. (I don't think that's exactly true today, but it could be).
But to the larger point, I would actually agree that sites should "review and take responsibility for every comment and every post." They are the ones amplifying and distributing this content, why should they have zero responsibility for it?
Yes that would dramatically change what gets published online, but I think that would be a good thing.
And how do you think any other website decides what to recommend you, if not other users' actions? Remember the Netflix prize? The data set they gave you is how other people rated movies. You can absolutely build a recommendation system without manual input from the operator.
And HN absolutely does promote submissions at the moderators' discretion. The moderators sometimes give old but overlooked submissions a second chance, and they also turn the flamewar detector on some stories that they think deserve more attention, which effectively promotes them against users' will.
So do you think the same logic applies to ISPs? Should they be reviewing all the content that they allow to transit their network and ban you if you try to evade their controls by using uncrackable encryption because if they mess up and allow you to distribute copyrighted or defamatory material they will be held liable? Remember that section 230 was originally enacted to protect them from liability.
No, I don't think it applies to ISPs. They aren't involved in selecting or soliciting the content, or providing the software and platform that creates or distributes the content. They are "just pipes." Their purpose is to move bits.
This is not a correct understanding of ISPs though. They do already have certain obligations to restrict content on their networks. In particular they are required to remove subscribers when they become aware that those subscribers are participating in copyright infringement.
Importantly, all except one of those things is impartial to the user, and even that one is merely binning based on a single category. "Algorithm" here is a red herring, IMO; people are objecting to a couple of fairly specific things. One is personalization carried out by the other party, the other is designs that introduce partisanship or are detrimental to the end user (i.e. addiction and other dark patterns).
> They are the ones amplifying and distributing this content, why should they have zero responsibility for it?
If LinkedIn started allowing hardcore pornography, many of their advertisers would leave.
With that in mind, are you certain LinkedIn takes “no responsibility” for the content they distribute? It would seem they have a multimillion-dollar stake in the outcome of their efforts to shape their commercial product.
The main difference is that HN uses time to segregate cohorts and TikTok uses interests to segregate cohorts. If enough people within these cohorts upvote / give watch time then the content is shown to more cohorts.
I understand the basic principle. Clearly that's one of the inputs. What I'm questioning is your implied assertion that there's nothing else to it.
I don't for a second believe that tiktok (or facebook or any of the others) employs a primitive algorithm that impartially orders results based on a simple and straightforward metric without consideration for their own interests.
>I don't for a second believe that tiktok (or facebook or any of the others) employs a primitive algorithm
Is your contention that whatever future law is passed should have some mechanism to decide the complexity of the algorithm? How would you design a law such that the reddit ranking algorithm is "primitive" but tiktok's algorithm is "advanced"?
You're changing the subject. I said nothing about the law, only objected to a claim about the internal mechanisms of tiktok.
If we're discussing hypothetical laws then my preference is for several. Banning various dark patterns (what the EU is doing here), banning opaque individualization outside the control of the individual in question, and banning motivated editorialization (such as intentionally promoting a particular political position). And yes, a straightforward application of what I wrote there would make the netflix recommendation algorithm as it currently stands illegal. I have no problem with that.
I agree with what OOP said. But it’s not my intent to “shut sites down.” I have this view to try to increase diversity of media consumption and break people out of echo chambers. If your business model is so shit you have to exploit weaknesses in human brains to keep people viewing ads and can’t adapt, then that’s your problem.
If you have an algorithm whose sole purpose is to drive "engagement" with your own platform (by intentionally and purposely pushing clickbait, ragebait, and media that keeps reinforcing your clicks) you should no longer get section 230 protections - you are no longer a neutral party. These algorithms exist to create echo chambers and keep you clicking so you can consume more ads.
I would love to hear other ways of solving the problems of social media.
> I have this view to try to increase diversity of media consumption and break people out of echo chambers.
Making sites liable for all user-posted content would do the reverse of this. Every platform that lets people submit content would have to stop doing that, because it’s an impossible liability to manage.
You’d have to host your own site. You wouldn’t be able to share anything about it on a social media site because it’s user-generated content. No visitors unless you advertise it through paid contracts with companies that can review it and decide to accept the liability.
Newspaper "Letters to the Editor" manage to do this. Users "submit" things to the newspaper, the editor curates and decides what to keep and what not to, and then the newspaper publishes the user generated content. Just like social media: Users submit things to the site, TheAlgorithm curates and decides what to keep and what not to, and then the site publishes the user generated content.
If web sites and social media can't "scale" to do this, then maybe they should scale down. "Making sites liable for all user-posted content" would not kill social media, but would definitely scope it down to what can be effectively curated.
I don't think there are enough dangs to effectively curate much of the internet, and how much would it be scaled back as a result? 95%? That's before settling on a definition of "effectively curate," I suppose.
"Effectively curate" here simply means "willing to take legal responsibility for" (although in practice I assume there would be an insurance policy involved because that's just how things are done).
I notice that parent describes "engagement" algorithms and you somehow jump to "all sites". So I think we'd see "engagement" algorithms disappear and very primitive approaches with prominent transparency measures in place would replace them. I expect we'd all be better off were that to happen.
So "letters to the editor" curated by employees would become part of their business model, and regular contributions would go away? Why would that assumption be incorrect? I wouldn't run a website where a casual user having a moment could result in my imprisonment. I would only allow non-LGBTQ content that didn't mention race or immigration, as the chilling effect there is real. A DA would for sure come after me if my site became influential.
It's a matter of resources, not corporate status per se. For better or for worse, the current status quo largely democratizes content promotion. You and I can post these two comments here and put our ideas and names in front of a bunch of strangers for $0.
In a world where the risk-adjusted cost of allowing third-party comments on your platform shoots up, someone has to pay that cost. A personal blog hosted on your server might struggle to find any significant reach without a real advertising budget, because distributing speech/content that promotes your platform would no longer be ~free.
I don't necessarily believe that the major social media platforms would fully evaporate, but I'd expect some or all of these changes across the ecosystem:
* Massively scaled up LLM-based moderation/censorship.
* Replacement of direct user content posting with an LLM-based interface (to chat with an LLM about what you want it to write on your behalf).
* Payment-gated public posting, e.g. monthly or per-post fees to cover liability/insurance and/or LLM inference costs. Possibly higher fees for direct authorship vs LLM pair posting.
* Massive rise in adoption of decentralized architectures, either via current mainstream platforms if legally tolerated or via anonymous dark web platforms otherwise. Maybe Tor becomes as normalized as VPNs, or maybe the Western legal environment shifts hard against general-purpose computing.
I understand where this sentiment is coming from, but I think it's taking a lot of the current status quo for granted. What you guys are proposing isn't necessarily a targeted change that would simply make bad guys stop doing bad things. It's more likely a massive structural change that would dramatically alter the social and economic fabric of the internet as we know it, and not in a way that most of us would like.
If Y Combinator has to officially approve every article submitted, then it will become a publisher of a news site, not a social media site. Essentially, it would be a New York Times site with unpaid writers.
Well, the argument was that Hacker News would no longer exist. I asked why, and your response was that it would be like the NY Times, but the NY Times website does exist, so I don't understand what point you're trying to make.
Got it. If the page doesn't fulfill the original purpose that people wanted to go to it, it ceases being interesting. The fact that the page merely exists is meaningless, much like a blank website.
Well, you pointed to the NYTimes which, again, has not changed, so what is your point? Maybe the NYTimes is not a good example? I don't know, you brought it up. Are you saying the NYTimes is not an interesting website? It seems to also have the news and discussion of the news, so what exactly am I missing?
> Some moron always show up with the "but it was all the EU subsides" talking point, which is quite frankly part of racist tropes of eastern Europeans being dumb and worse than westerners. Could you imagine them accomplishing anything on their own? That's ridiculous. It's us, the western saviors, who did this with our penny subsidies!
Ireland were in a similar position for instance (received €40bn in EU subsidies in the first 45 years of membership; now a net contributor).
I'm wondering how much of the net contribution comes from tech companies and how it compares to the loss of taxes due to Ireland acting as a tax haven for tech companies.
EDIT: Net contributions seem to be $3bn/year (total, independent of tech) while loss for other EU countries due to corporate tax evasion is $6bn/year.
Case in point - my first webdev job was producing a site for the city library. My boss explained to them, when going through their sitemap, that they should rename their planned section from "Lending" to "Borrowing".
> Chaining nudges you toward “process everything,” even when that’s not what you meant to do.
It feels pretty clear that the chains in that example (filter/map) are meant for operating on collections. And that if you're searching for a single item then chaining isn't the way to go?
Personally, if I knew I wanted only a single item I wouldn't feel more "nudged" towards appending a [0] on the end of a long chain rather than doing a refactor to the find().
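To make the contrast concrete, a minimal sketch (the `users` data here is hypothetical, just for illustration) of the filter-then-index pattern vs. `find()`:

```javascript
// A small collection to search (hypothetical data).
const users = [
  { id: 1, name: "Ada" },
  { id: 2, name: "Grace" },
  { id: 3, name: "Edsger" },
];

// The "chaining nudge": filter the whole collection, then index in.
const viaFilter = users.filter((u) => u.id === 2)[0];

// Clearer intent for a single item: find() stops at the first match
// and returns undefined (rather than crashing) when nothing matches.
const viaFind = users.find((u) => u.id === 2);
```

Both produce the same element here; the difference is that `find()` states the intent and short-circuits, while `filter(...)[0]` scans everything and hides the "single item" intent behind an index.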
As to:
    data
      .transform()
      .normalize()
      .validate()
      .save();
here the problem isn't that you've used method chaining, it's that you've given your functions terse names whose meaning you'll forget later on, e.g. a generic "normalize" vs. a "toLowerCase()" or whatever.
An apples-to-apples unchained equivalent isn't really any better.
One benefit of the more verbose lower example is that when it fails somewhere, you get a clear line number for where it failed, so you can start digging deeper right away. In a chain, unless I am mistaken (maybe I'm mixing up Java and JavaScript, so sorry if I'm off), it's just a general error somewhere in that chain.
Nothing earth-shattering, but anyone who's had to debug something seen in production once and never again knows that any additional useful info is worth its weight in gold.
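A sketch of that debugging trade-off, using hypothetical `normalize`/`validate` stages: in the stepwise form, the thrown error's line number identifies the failing stage, and the intermediate result stays inspectable.

```javascript
// Hypothetical pipeline stages for illustration.
const normalize = (r) => ({ ...r, name: r.name.trim().toLowerCase() });
const validate = (r) => {
  if (!r.name) throw new Error(`invalid record: ${JSON.stringify(r)}`);
  return r;
};

const records = [{ name: " Ada " }, { name: "" }];

// Chained form: the error surfaces from one long expression,
// so the stack trace points at the whole chain:
//   records.map(normalize).map(validate);

// Stepwise form: the line that threw identifies the stage, and
// `normalized` can be logged or inspected before validation runs.
const normalized = records.map(normalize);
let error;
try {
  normalized.map(validate); // second record fails validation here
} catch (e) {
  error = e.message;
}
```

In languages without good source maps or with one-expression stack frames, that per-line attribution can be the difference between a two-minute fix and a guessing game.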
My undergrad was in humanities - we had multiple essays to submit throughout the year but they only counted for ~10% of the final grade, with the rest through an in person exam paper.
It was just expected that you had a grasp of the literature enough that you could argue off-the-cuff in the exam setting, and then you were given leeway if you didn't have exact Harvard style notations to exact date/titles of referenced material.
> Prosecutors argued that they had a right to demand material that Heppner created with Claude because his defense lawyers were not directly involved, and because attorney-client privilege does not apply to chatbots.
>
> Voluntarily revealing information from a lawyer to any third party can jeopardize the customary legal protections for those attorney communications.
>
> Manhattan-based U.S. District Judge Jed Rakoff ruled in February that Heppner must hand over 31 documents generated by Anthropic's chatbot Claude related to the case.
>
> No attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude," Rakoff wrote.
If I hand wrote some notes in a notebook or diary, I wouldn't have to hand them over, as I understand it, even with no lawyer in the mix. Same if I wrote some notes in a text file on my computer.
Leaving AI aside, what in particular makes this different from using any other cloud-based software? Does writing a Google Doc to gather my thoughts or a draft email in Gmail constitute "revealing information from a lawyer to a third party"?
What if Google have enabled AI-features on these? Feels like this area really needs clarity for users rather than waiting for courts to rule on it.
> If I hand wrote some notes in a notebook or diary, I wouldn't have to hand them over, as I understand it, even with no lawyer in the mix. Same if I wrote some notes in a text file on my computer.
Absolutely wrong in the U.S. The police can't just break into your home and demand it, but a judge can 100% mandate discovery or a subpoena if there is reason to believe that evidence exists which is relevant to the case.
The 4th amendment prohibits UNREASONABLE search and seizure, and we let judges make that determination. You never have absolute privacy rights.
Note that the judge is bound by precedent and law as to what "unreasonable" means, they can't just make it up as they go along unless there is no precedent. Otherwise the case can be reversed on appeal.
I was on a jury recently where we had to swap out judges in the last couple days of the trial. The reason was because the judge had been assigned another case where the defendant had not waived his right to a speedy trial. The judge wanted to finish his existing case first, the defense lawyers said "You can't do that", the judge looked it up and found out that indeed they were right, so off he went to start the new case and handed off the existing one to a colleague. In my experience judges really do take the law seriously - that's how they get to be judges.
My understanding is that judges have certain specialties - one judge might be well versed in a particular area of law but not other ones. The case I was on was an area where nobody in the district had expertise, and everybody (judge, prosecutor, defense, jury) was learning as they go. The new incoming case was one that was in an area that our previous judge would normally handle. So it was assigned to him because it came in through his department, while the case I was on was sort of a free-floating orphan where not much was lost by having another judge handle it (and it was also already in the jury instructions phase, with testimony complete).
This. All of your rights are up for debate under a judge. There’s only a few you can still exercise if a judge wants something from you but ultimately if a judge decides it’s relevant to the case, it’s relevant to the case and you must comply. Or be held in contempt. Or praise? With a senate hearing to boot. I’m confused on how our legal system actually functions now but that is how it’s supposed to be. If a judge decides to include it, it’s in. Go get it.
One of my friends recently spent some time getting an OpenClaw instance running in Ubuntu so he could have a truly private conversation with it, complete with an air gap.
The value of that configuration has just been greatly magnified.
Has it? There's value in privacy vis-a-vis snooping corporations, but those conversations could still be surrendered to the court if the judge rules them potentially relevant, and if your friend refuses to do so, he'd be held in contempt of court.
I agree, but it's not like Anthropic was running to tell the lawyers and the judge in this story. The most likely scenario is your friend would just let slip he's using AI, or people who know him would let it slip, and the lawyers or judge will demand the conversations for discovery.
If I was strongly motivated to gather AI analysis of litigation, I think that I would turn to Tor if possible, and remove any specifics from the discussion.
Unless you personally developed the AI to do this, then it is almost certain that any third party AI is harvesting every nugget from you in one way or another. Even when they say they aren't. Like all the other big tech out there, it was designed for the makers not for the users.
This thread is about a locally running LLM, with an air gap.
How can a third party company harvest anything from that? Even if you didn't develop the LLM yourself, if you downloaded it and are running it locally with no internet access, I don't see how it'll leak info to a third party.
> If I hand wrote some notes in a notebook or diary, I wouldn't have to hand them over, as I understand it, even with no lawyer in the mix. Same if I wrote some notes in a text file on my computer.
There is some protection of personal private documents for civil cases. But for a criminal case, there is no 4th or 5th amendment protection for stuff you wrote in your diary.
If you were caught with notebooks detailing your plans to kill a list of people, showing that you've meticulously tracked their movements and listing locations for dumping the bodies that would be extremely relevant. I don't see how it'd be a good idea to exclude that kind of evidence.
Reading the ruling in more detail, this is definitely a "this is not even close" case.
First off, the Fifth Amendment right to not self-incriminate is rather narrower than you might expect. With regard to document production, it only privileges you from having to produce documents if the act of producing those documents would in effect incriminate you. So if you tell people "I've got a diary where I've been keeping track of all the crimes I've committed..." the government can force you to turn over that diary.
Second, the default assumption whenever you send something to another person is that it's unprivileged communication. IANAL, but even using cloud storage for things I'd want to remain privileged is something I'd want to ask a lawyer about before relying on. Although that's also as much because the default privacy policy of most services is "fuck you."
Which is what happened here. Claude's privacy policy says that Anthropic reserves the right to share your chats with third parties for various reasons, which means you have no reasonable expectation of privacy in those communications in the first place and automatically defeats any other confidential privileges. What happened is therefore little different from the defendant texting his attorney's responses to his friends, which is a fairly time-worn way of defeating attorney-client privilege.
Seems an opportune time to remember that every day is STFU Friday. And, to quote The Wire, is you taking notes on a criminal fucking conspiracy?
You cannot be compelled to provide testimonial evidence that might incriminate you. Physical evidence, documents, computer files, anything not under attorney-client privilege is fair game for a subpoena or warrant.
This may be different. Unlike your own personal notes, your lawyer's notes and records cannot be subpoenaed. But... the TOS from Claude might be a backdoor. So this is maybe an untested situation (as far as I know). The judge could decide the info is privileged because it's an extension of the lawyer's note-taking and research, OR the judge could say it's not privileged because the info was shared with Anthropic as a third party. Anybody know if this has happened yet?
This isn’t really attorney client privilege and would much more likely fall under the work-product doctrine [1,2], where documents prepared for the purpose of future litigation are protected from discovery and could be considered analogous to attorney-client privilege (but is actually much more broadly defined than attorney-client privilege[4]). Google can and does provide emails and documents under subpoena, but courts have ruled multiple times that emails, google docs, etc. were protected under work product doctrine or attorney-client privilege. Just because a third party has it and is willing to give it over does not negate privilege. The “shared with Anthropic” argument does not hold up to precedent when SaaS is used.
Even if opposing counsel is able to obtain discovery on a work-product, only fact based products, not opinion based are allowed. In other words, the court is supposed to remove anything related to “mental impressions, conclusions, opinions, or legal theories of an attorney or other representative of a party concerning the litigation” [3]. For conversations with AI about how to conduct your case, that would exclude basically everything since it is an opinion work-product, not a fact work product. A fact based work-product would be things like “statements or interviews of now deceased witnesses, photographs or video of an accident scene taken at the time of the accident”[4].
If I collected research and wrote down possible legal strategies in a Google Doc in preparation for meeting a new attorney, that would be protected. But if I do the exact same thing in google Gemini, it isn’t because Gemini “is not a lawyer” [5]? He ruled “Heppner did not [use Claude] at the suggestion or direction of counsel [5]” but as I just said, you are protected when self-initiating note taking before meeting with an attorney. The attorney does not have to direct you to do it for it to be protected. Honestly this really doesn’t read as solid reasoning underpinning the ruling at all.
In theory you can have the same with incognito sessions (never stored; that's part of what the Italian privacy regulator forced OpenAI to do), and the same goes for the right to deletion under the GDPR.
I don't think this is any different from other cloud-based software. Cloud providers can be compelled to turn over your data, as long as they're actually capable of doing so. If you don't want your data being snarfed up from a cloud provider and used in court, then only use cloud providers with end-to-end encryption, or better yet don't put your data in cloud providers at all.
The only reason this ruling is even remotely interesting is because people don't understand computer systems, and chatbots feel different. For the technologically minded, it should be pretty obvious that typing into a chatbot is no different from typing into a Google Doc, and that the data in both can be available to the legal system without the user's involvement or consent. But most people aren't technologically minded and may not have realized that all of their data is being saved and made available like that.
>If I hand wrote some notes in a notebook or diary, I wouldn't have to hand them over, as I understand it, even with no lawyer in the mix. Same if I wrote some notes in a text file on my computer.
Is that true? I would expect that any notes I have in any form could be requested during discovery (client-attorney privilege being one of the few exceptions, and narrower than people assume).
I'm not American, but isn't this covered under the Work Product Doctrine (as mentioned further down in the article)? Material prepared with an eye to litigation?
Though maybe not applicable here if the charge is criminal, I thought it was a civil case on first reading.
> Basically...I just don't know what communication medium would allow a company that makes app icons to keep their customers in the loop about updates & concerns related to the product. Are you gonna install a Font Awesome app?
What companies _used_ to do is have "Subscribe to our newsletter" on their site - either for non-account holders, or as a separate checkbox when setting up an account.
Same with email frequency — it would be trivial to add "when do you want to hear from us?" as a question: "when we release a new font / when we make changes to a font you've purchased / only account-related".
We have the patterns for all this already established.