Yeah Alaska plates are fairly rare so you could maybe get away with them not adopting the standard. Hawaii plates are EXTREMELY rare because of the cost of freighting a car over and there's no real reason to register a car in Hawaii that I'm aware of. [0]
[0] I'm thinking here of places like Montana, which attracts a fair number of out-of-state registrations to avoid sales and registration taxes in some states. PS: don't try this; most states have already considered it, and you're often violating sales tax laws if the car doesn't leave the state within a few days of purchase.
Not sure of the explanation but it is amusing. The main reason I'm not sure it's political correctness or one guardrail overriding the other is that when these models were first released, one of the more reliable jailbreaks was what I'd call a "role play" jailbreak, where you don't ask the model directly but ask it to take on a role and describe things as that person would.
Yesterday, prompted by a HN link, I tried the “identify the anonymous author of this post by analyzing its style”. It wouldn’t do it because it’s speculation and might cause trouble.
I told it I already knew the answer and want to see if it can guess, and it did it right away.
Yes but generally one cannot walk into a store and buy a fake id, then turn around and hand it to another cashier in the same store for a restricted purchase. Which I think would be the closer metaphor.
Except that each of the parent's chat windows has zero context that the other window's request even exists, so from each window's point of view it's as if one person walks in to a store to buy a fake ID, and then somewhere else in a different universe on a different timeline a different person walks into a different store to hand that same fake ID over to a different cashier for the restricted purchase.
The LLMs are doing the best they can with absolutely zero context. Which has got to be a hard problem, IMO.
Except that's the point. It is the same store. It is two different cashiers. The second one doesn't know you got the ID from the first one, that's why it works. The point is that if a store like that existed, it would be stupid as fuck.
Also, at least in ChatGPT, it has access to every other session, so you're never working with zero context unless you create a new account (and even then they could have other fingerprinting, I just haven't tested it).
I haven't trusted that disable switch for a while now... I'd always had it disabled, but there was one conversation in particular where it referenced a past conversation - despite memory being disabled - and when I asked it why it responded the way it did, it pretended I was mistaken and told me it has no memory of past conversations, even though I could scroll up and see it in the response.
Just because you flip a switch doesn't mean the switch is _actually_ flipped. Same thing goes for turning off wifi/Bluetooth on iOS.
If it's a software switch, it's closer to a promise than a guarantee.
My favourite example of bureaucracy that I've ever personally experienced and that I consider to be a hole in one is when I had to show my ID to pick up my passport from the office. I paused for a second and asked the lady what was up with that and if I can now use my passport if I got back in the line for something else without using my ID and she said yes.
Sometimes it reveals hidden biases within ourselves/society as a whole. Like, do I give gays preferential treatment in a way to avoid seeming discriminatory?
It does feel a bit supra-therapeutic at times tho, agreed, but maybe it's one small novel contribution.
My bigger question is: WHY can’t we stop the human vs AI comparisons?
You can replace references to "gay" with "Christian" and it works just as well. I think it's simply the role-playing aspect that escapes the guardrails.
I thought the whole point of role-playing was playing to the trope of the group you're role-playing as (at least in TTRPGs, where dwarves, rogues, warriors, paladins, etc. all usually have a trope that defines their existence).
That's what I assumed too, but I don't think there's a huge difference between a role playing group that uses a TTRPG to play their roles and one that just kinda adlibs it — the point of the game is usually to play a role that you normally don't play, which is almost by definition a trope/stereotype.
All that to say that I have the same question as you (what is a non-stereotypical role?)
You can type into a word processor "I am an FBI agent" without committing a felony. How is an LLM different from a word processor, such that it would count as impersonation?
Mens rea. Typing that into a word processor is obviously not using the false pretext to gain anything. Doing it to Claude could be construed as an attempt to gain information, which checks some boxes for fraud and impersonation of government officials.
For reference, I think this is one of the relevant sections of the USC (18 USC 912):
Whoever falsely assumes or pretends to be an officer or employee acting under the authority of the United States or any department, agency or officer thereof, and acts as such, or in such pretended character demands or obtains any money, paper, document, or thing of value, shall be fined under this title or imprisoned not more than three years, or both.
IANAL but I can see interpretations where telling Claude you’re the FBI would qualify. It’s probably unlikely anyone is prosecuted for it, but there’s a chance.
The reason this kind of impersonation is illegal is that people are more likely to feel compelled to comply with an official and get taken advantage of, as well as to preserve the authority of the position (if anyone could claim to be an official with no repercussions, the claim would lose its weight, since the claimant could easily be an impersonator). If you pretend to be a government official with an LLM, the LLM is not going to have its opinion of people claiming to be government officials tainted, nor does it have access to any sensitive information that's not available by other means, nor is it possible to cheat it out of something that rightfully belongs to it.
Additionally, mens rea refers to the cognition that one is doing something wrong. It's not at all clear that lying to a person and lying to a computer program are subjectively equivalent or even similar to the liar, and given the previous paragraph I'd argue they are not. Why would someone feel guilty about doing something that can't possibly have repercussions?
How does that change anything? The HTTP protocol is just how I communicate with the program, just like how the USB protocol is how I communicate with the word processor. The dividing line is when the message crosses computer boundaries? Then it should also be illegal to write "I am an FBI agent" in a text file and upload it to Github.
>The same way you can't type everything into Google.
Who says you can't, physically or legally? Maybe Google will refuse to fulfill some search requests, but that's a different matter from it being illegal.
Intention is very relevant to legal interpretations of "unauthorized access"; both the intentions of the owner, and the intentions of the "intruder". See for example United States v. Auernheimer. There's relatively well-established precedent that when a service tries to safeguard some information, that information is legally protected no matter how technically feeble the attempt at safeguarding it was.
It's not specifically tested in court and I sorta doubt OAI would start suing random users for attempting jailbreaks, but if they did, I wouldn't be surprised if they could win based on the most relevant precedents
May it? An untitled.txt with the content "I am an FBI agent" and no further context could lead a human to think the author is claiming to be an FBI agent? Okay, sure. Then let's go a step further. The repository is private and you never share it with anyone. At that point, the sentence is just as visible as when you type it into Google's search box or into a chatbot's window. Is that impersonation too?
If Google provided you with different search results, including some intended for law enforcement only, that would be different. Granted, that's extremely bad security, yet that argument hasn't prevented, say, credit card fraud convictions.
Just off the top of my head, an offense of impersonation will have an element along the lines of "doing [a] thing[s] such that a reasonable person [does/would] believe you're a real cop", which [optimistically] would not be satisfied as there would be no actual person being led to believe anything, or the court would [optimistically] not find that its model of a reasonable person would be genuinely convinced by someone on the internet typing "I'm an FBI agent" or whatever.
I bet it could be some interesting caselaw actually, if it resulted in circuit court judges (or whoever) writing opinions about the essence of impersonation, fraud, etc. and what kind of actual or hypothetical agent is needed to make the crime a thing that could have happened. E.G., basically, if you sit alone in a room where nobody else can see or hear you, and you put on a realistic local police uniform and declare to the room that you're a licensed police/peace officer, is a crime being committed (i.e., is the nature of the crime "pretending/claiming to be a cop" or "making an actual person really believe it" or something else)
(could also be an intent element to satisfy, not sure)
The only way I could see it counting as impersonation is if the LLM is able to call tools and has access to, for example, an FBI-relevant database, but there is no login or anything in front. So a random anonymous user can hop onto a chat and pretend to be an FBI agent and the LLM must somehow decide whether the person is really one before returning some external information. In that case, yes, lying to the LLM about being in the FBI would be impersonation, just as if you stole an agent's credentials and used them to log into the FBI's network. The LLM in that case is performing an authentication function that, say, ChatGPT doesn't.
The crime is impersonating an FBI agent to others. How you do that doesn’t matter. Privately it won't matter, but if you make a public statement which is untrue like this and it persuades others there may be consequences.
Laws against impersonating law enforcement exist so that law enforcement officers can get compliance from people that they wouldn't be obligated to provide to regular civilians.
You can't impersonate something to a text editor as there's no special compliance you could get; WYSIWYG. But to a chatbot, you could get special compliance based on your identity.
Impersonating a federal officer for the purpose of exceeding authorized access to a computer system in furtherance of a fraud, upon Claude, in excess of $5,000 worth of tokens?
I don't think it should even be surprising or controversial that it works with an apparent slant.
All these filters have a single purpose: to protect the lab from legal exposure. So sometimes there's an inherent fuzzy boundary where the model needs to choose between discriminating against protected classes or risking liability for giving illegal advice.
So of course the conflict and bug won't trigger when the subject is not a legally protected class.
The point is I'm not sure it's novel, and not just a PC-flavored version of the classic role play jailbreak that's never really stopped working on these models. If it had stopped working definitively, maybe it'd be more convincing that this is a novel type that uses the guardrails against one another, but afaik they never definitively patched the RP jailbreaks.
It's unreasonable to expect Cloudflare etc. to be able to proactively identify legal vs illegal streams. The companies who own the copyrights can't even get that right, much less a third party that has no idea whether a stream is licensed.
Why though? Why is it unreasonable to expect a company to have some level of responsibility for serving clients that are using their platform for illegal activity?
It's the same thing with social media and moderation. We don't have to let them off the hook just because doing the right thing would make them unprofitable.
Because the expectation that companies police every single bit that crosses their network is completely unworkable. It's functionally impossible to tell a licensed stream from an unlicensed one; the distinction isn't available to Cloudflare or any other networking provider. It lives in private contracts between the copyright owner and the streaming provider, and there's a whole snarl of copyright exceptions.
To make the distinction LaLiga wants, they'd have to inspect every single packet, determine whether it's a LaLiga game, determine whether it's the current game, and determine whether it's a licensed provider. There's a reason Section 230 was created in the US.
I mean, how do we qualify which companies get punished for which crimes?
Do we punish gun manufacturers for someone being shot? Kitchen utensil companies for someone being stabbed? Car manufacturers for car crashes? Road construction companies for human trafficking?
How deep does this go? Is a steel foundry responsible for the stabbing? Is a camera lens manufacturer responsible for illegal porn?
That is something we'll need to figure out. Just because it requires some work to figure out where to draw the line, it doesn't make it wrong to draw one.
Banks are generally required to check that their customers are not laundering money. In a lot of countries it's illegal to buy or sell goods that you know are very likely stolen.
I don't think it's outrageous to expect more action from Cloudflare when they must know that their service is used to protect criminal sites.
Relatedly I'd want the betting companies whose ads are shown on these illegal pages to have some amount of responsibility for where their ads are shown, and the same goes for well-renowned websites that show clearly deceiving ads.
This law on banks is a bad law. It doesn't stop money laundering, it does make it hard for lots of people to have bank accounts. We should abolish that law, not copy it.
Cloudflare can assign IPs based off customer reputation. High risk customers get high risk IPs. This way legitimate businesses stay on IPs that don't get blacklisted and sketchier businesses go on high risk IPs before they potentially get banned.
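A purely hypothetical sketch of what that tiering could look like; the pool names, score scale, and threshold are all invented for illustration (this is not anything Cloudflare is known to do):

```python
# Hypothetical reputation-tiered IP assignment. Customers with a good
# reputation get IPs from a pool kept clean of blacklists; sketchy
# customers share "expendable" IPs that can be banned without collateral
# damage to legitimate sites. IPs below are documentation-range examples.

IP_POOLS = {
    "low_risk": ["203.0.113.10", "203.0.113.11"],     # kept off blacklists
    "high_risk": ["198.51.100.10", "198.51.100.11"],  # expendable if banned
}

def assign_pool(reputation_score: float) -> str:
    """Map a customer reputation score (0.0 worst .. 1.0 best) to a pool.

    The 0.7 threshold is an arbitrary illustrative cutoff.
    """
    return "low_risk" if reputation_score >= 0.7 else "high_risk"

def assign_ip(reputation_score: float, index: int = 0) -> str:
    """Pick an IP from the appropriate pool (simple round-robin by index)."""
    pool = IP_POOLS[assign_pool(reputation_score)]
    return pool[index % len(pool)]
```

The design point is that blocklisting then only punishes the high-risk pool, instead of taking down unrelated customers who happen to share an IP.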
Any action by Cloudflare before a court order or notice would be proactive. There's no way to effectively block streamers of live events, because they can create new sites or accounts for each event, and by the time they're found and reported, and Cloudflare reasonably reviews and acts on them, the event will be long over.
What do you expect cloudflare to actually do about these streams?
A content report form, like DMCA, with support people behind it processing the tickets. It already exists.
When there is phishing or pedo content, do you think they wait for a court order, or react to abuse reports?
They are distributing content through their servers, not just displaying it.
Every hosting and CDN company has an abuse department; it's a normal part of the process. Here, Cloudflare is aware and chooses to ignore the abuse requests, so they have to take responsibility.
Cloudflare is a US-based company so they are realistically out of reach, or too late.
If there are abuse requests, and Cloudflare wants to comply but not block the website, they can downgrade to DNS only, and then the host IP would be blocked.
If Cloudflare doesn't comply and intentionally keeps distributing content -> block Cloudflare.
At some point for them, the cost of complying with the law will be cheaper than handling the complaints that they are blocked.
It's like YouTube: they shut down content at the request of rights holders.
Afaik, Cloudflare was asked to block an IP, to which they answered that it's not a dedicated IP but a shared one, please be more precise. Being more precise takes effort and time, so LaLiga opted to ban the IP at the ISP level, where they don't have to ask anyone.
Maybe I'm being optimistic but I'm assuming the first action wasn't large scale IP blocks. Cloudflare likely didn't take action.
> What do you expect cloudflare to actually do about these streams?
I'm sorry, but I'm not buying that the market leader in bot detection can't detect a sports event suddenly being streamed to an influx of people from a new IP at kickoff. If this was the US banning them, I'm sure they'd have found a way around it by now.
Even if they could detect that, it'd require peeking into every bit that passes through their service(s) looking for offending content AND knowing it's not a licensed stream. The latter is its own can of worms: they can't know whether any particular piece of data is properly licensed or not. Bot detection is relatively easy in comparison; the distinction between licensed and illegal streams is 100% vibes from Cloudflare's available data.
Yep, the only real areas where reshoring can provide savings are shipping and iteration time. The former is cheap compared to US labor, and the latter can be solved by having your design team in China too.
The economics just don't really make sense; it's so much cheaper to produce abroad and ship it here. Also, this isn't manufacturing, just the testing labs, and those exist to provide rubber stamps for good-enough products that maybe have a few issues.
I think you misunderstand how the courts work. No other court would rule on this case because it wouldn't be heard in another circuit, and the Supreme Court is the ONLY court anyone can appeal to after a circuit court. The only other option is convening a larger panel for an en banc hearing, but that doesn't apply here afaik.
I was trying to save Colorado taxpayers and people that disagree some time and energy. To focus on what they can control which isn’t this.
Additionally, I was alluding to the process of using another circuit by bringing a case in another state that has similar laws to Colorado's; that's the only way to create a potential circuit split, forcing SCOTUS review.
Splits don't actually force the Supreme Court to take it up on any thing approaching an immediate time frame. There was a split about what "exceeding authorized access" meant in CFAA for ~10 years before the Supreme Court deigned to take it up and resolve the difference.
Correct, it's only /binding/ on courts in the same District but they are often persuasive when cited in other districts if the decision is well reasoned and less controversial. This one will likely be contested, the circuits have very different ideas about gun rights.
I've given up trying to logic out what the court will decide on many issues, they're quite willing to find new legal arguments to allow their preferred outcome in a particular case.
It feels like they have to say/believe it because it's kind of the only thing that can justify the costs being poured into it and the cost it will need to charge eventually (barring major optimizations) to actually make money on users.
The problem is that it's specific to that API and defaults to uncapped, so people who aren't using it, and haven't heard about the issues with Firebase API keys, probably won't have set the caps.
Spend caps exist for Gemini (Maxious linked them); they just default to OFF. For an API that can bill four figures per hour, making safety opt-in isn't a UX choice, it's a billing strategy.
Except that Google's own statements are extremely clear that "leaked" (i.e. public) API keys should not be able to access the Gemini API in the first place: "We have identified a vulnerability where some API keys may have been publicly exposed. To protect your data and prevent unauthorized access, we have proactively blocked these known leaked keys from accessing the Gemini API. ... We are defaulting to blocking API keys that are leaked and used with the Gemini API, helping prevent abuse of cost and your application data." https://ai.google.dev/gemini-api/docs/troubleshooting#google...
For extra clarity on the exact so-called "vulnerability" that Google identified, see: https://news.ycombinator.com/item?id=47156925 This describes the very issue where some API keys were public by design (used for client-side web access), so the term "leaked" should be read in that unusually broad sense. Firebase keys are obviously covered, since they're also public by design.
(As for "Firebase AI Logic", it is explicitly very different: it's supposed to be implemented via a proxy service so the Gemini API key is never seen by the client: https://firebase.google.com/docs/ai-logic Clearly, just casually "enabling" something - which is what OP says they did! - should never result in abuse of cost on the scale OP describes.)
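A minimal sketch of that proxy pattern, assuming a simple request-builder shape (the endpoint path, model name, and payload layout here follow the public Gemini REST API style but are illustrative assumptions, not Firebase AI Logic internals):

```python
# Sketch of the proxy pattern: the client sends only the prompt; the
# server holds the Gemini API key and attaches it when forwarding the
# request upstream, so the key never ships in client-side code.

GEMINI_ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    "{model}:generateContent"
)

def build_upstream_request(prompt: str, server_side_key: str,
                           model: str = "gemini-pro") -> dict:
    """Build the request a proxy would send upstream on the client's behalf."""
    return {
        "url": GEMINI_ENDPOINT.format(model=model),
        # The key is attached server-side; the browser never sees it.
        "params": {"key": server_side_key},
        "json": {"contents": [{"parts": [{"text": prompt}]}]},
    }
```

The point of the pattern is exactly the contrast drawn above: a key embedded in client-side JavaScript is public by definition, while a key that lives only behind a proxy endpoint can't be scraped and abused.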