Regardless of the purported upside, many people in the arts feel betrayed by the commercial interests that built this technology on their work without their consent, and threatened by these vendors' explicit intent to devalue their work by saturating the art and design market with cheap automated substitutes.
A lot of artists who would love to be able to direct their professional software in natural language have to reconcile that with how this technology came to be and what the aims are of the company now delivering it to them.
I spent most of my career in the open source world, and it doesn't bother me that models are trained on my output. Should I feel differently? It seems there's a kind of ego or emotional attachment to one's output that is more common among artists than devs. Perhaps abundance vs. scarcity mindsets?
Regarding generative images, it's more of an issue because the effects are different.
Software tends to be a "living" project, so just vibe coding with 0 software knowledge is not yet fully sustainable for maintaining a project. But with art, the AI just spits out a completed image.
The generated images compete directly with the people the data was sourced from, and there have also been many cases of abuse, e.g. people using AI to impersonate a popular artist and selling commissions under that artist's name.
The copyright situation for generated imagery is also tricky, so people passing themselves off as artists while selling work that isn't copyrightable can cause a ton of trouble and financial loss for customers.
Most of these issues don't apply to software in the same way. That's why I was surprised by the backlash here: this only touches the software side, and I don't see it as threatening artists' work.
When I was dabbling in image generation (~StyleGAN2 era), my vision for image generation models was as a support tool for artists (back then I was generating small character thumbnails to help me brainstorm ideas for drawing), believing that people valued art for the human effort. Even then I would have considered what Anthropic are trying to do here as the preferable way to use AI in art workflows.
It threatens because we aren’t just talking about selling your art. Artists get hired at companies to produce all kinds of work that will now be replaced by AI.
Artists get hired at companies because companies have the technology that made artists' work profitable, starting with book printing (public performance -> book printing -> cinema -> TV -> internet; similarly, drawing -> photo -> digital). In the public-performance / drawing era, artists were mostly poor, low-class rogues. Technology made them what they are now.
They are protesting against natural technology development. To me it looks similar to taxi drivers protesting against Uber (protecting their right to scam tourists).
Did drawing artists protest against photography? Do celebrities protest against photographers selling photos of them taken in public places?
They are right to be afraid, though. What's really happening here, most probably, is that Anthropic is buying the rights to collect user trajectory data in order to replace Blender users later.
I'm an artist turned CTO. My perspective is really simple - theft is theft. You (not you specifically per se) can sugarcoat it however you like, but copying open source codebases/work is different from stealing proprietary/licensed work without permission. It would have been ok if stealing/sharing copyrighted work was heavily normalized, but no, a lot of people have gone to prison for simply pirating DVDs and CDs and now you're telling me it's somehow ok if a corporation does it?
How come? We give IP law / copyright legitimacy, but the more I think about it, the less clear it is to me. If you draw something you definitely own the physical drawing, but owning the idea of the drawing during your lifetime feels strange to me. It's also a very recent invention; humans created art before it and will create art after it.
I agree that copyright is foundationally wrong, but the way out has to be through a culture shift of people putting their work in Public Domain.
It's not up to a private company to decide everyone else's work is public commons.
The issue is not stealing the idea itself. The issue is stealing the work in its entirety - as is - with all its flaws and character intact. That's what makes art unique, right?
I would think the same goes for codebases too. On a personal note, I wrote a CMS in Elixir from scratch way before even AI was a thing. It uses a lot of proprietary flows to make it scale, helping it serve millions of requests efficiently. I certainly did not give OpenAI nor Microsoft permission to steal my code. And yet they did.
Is that not theft of my Intellectual Property?
> It would have been ok if stealing/sharing copyrighted work was heavily normalized, but no, a lot of people have gone to prison for simply pirating DVDs and CDs and now you're telling me it's somehow ok if a corporation does it?
There is no such thing as "stealing" copyrighted work. Either you have unauthorized access and/or distribution, or you don't.
Unauthorized access to copyrighted work is perfectly legal in a big chunk of the world, including western Europe. Read up on the French tradition of copyright law, particularly the provisions for personal use.
This brings us to how "people have gone to prison for simply pirating DVDs and CDs". The bulk of the cases were focused on mass commercial distribution of verbatim copies of third-party content. I'm talking about DVD-burning factories.
> Maybe true in places with different cultural values like China or India.
No, this is a core trait of the whole concept of copyright.
Copyright is a legal tool that allows authors to claim the exclusive right to monetize their work. But from its inception, this same legal tool was designed to ensure the public has the right to access copyrighted works without authorization, including but not limited to the right of unauthorized access for personal use and the eventual passage of all works into the public domain.
This notion originates from France's copyright law, from which all copyright laws in the world directly or indirectly derive. We are talking about centuries of legal history.
I was alluding to the lack of Software Patent and Copyright enforcement in some jurisdictions, and hoping people would connect the issue of isomorphic plagiarism on their own.
We are in the age of "Napster" for nonsense, and "free" stuff other people made is certainly a crowd-pleaser. =3
For example, you could at least accept that the world is large enough to have people with other needs, drives, and degrees of ownership over their work.
You could also consider that this is not an even trade; artists had all their works ingested and didn't get a commensurate stake in OpenAI.
You can consider that you had a choice to share when you contributed to open source. Then imagine how a counter culture artist, who despises corporate culture, must feel to have their work consumed by another rapacious tech entity.
Or you can be the filmmaker whose clients are now showing up with entire ad clips, and then decide they would rather not spend the money on CGI to complete the video - essentially demolishing work overnight.
This isn't to say that there are no artists who are excited by this, or artists who are happy to have their art ingested. Just that the way you phrased your question evoked this answer.
Speaking as someone who works in the industry, I haven't really heard this sentiment. Artists are predominantly hostile to diffusion models, but optimistic about LLMs and their ability to help them write tools and scripts even if they're non-technical.
Yeah, I can understand being upset with their work being stolen to train these models. Anthropic doesn't seem to be working on image/video generation, but they are still training on text-based creative works of questionable sourcing.
Makes me think that there's some room in the model lineup for one that doesn't do as well on benchmarks, but is trained on "ethically sourced" data (though they'd need to somehow prove that they aren't "accidentally" including other data).
People have values that go beyond wealth and fame. Some people care about things like personal agency, respect and deference, etc.
If someone were on vacation and came home to learn that their neighbor had allowed some friends to stay in the empty house, we would expect some kind of outrage regardless of whether there had been specific damage or wear to the home.
Culturally, people have deeply set ideas about what's theirs, and feel like they deserve some say over how their things are used and by whom. Even those who are very generous and want their things to be widely shared usually want to have some voice in making that come to be.
If I were a creative, I would avoid seeing any work I am not legally allowed to draw inspiration from - why install furniture into my brain that I can't sit on? I see this kind of IP protection as poisoned ground; you can't build anything on top of it.
ZIRP ended, its remaining monopoly money has been burnt through, and the projected economy is looking bleak. We're now in the phase where everything that can be monetized is being monetized in every way that can be managed.
Free tiers evaporate. Fees appear everywhere. Ads appear everywhere, even where it was implied they wouldn't. The lemons must be squeezed.
And because everybody of relevance is in that mode, there's little competitive pressure to provide a specific rationale for a specific scheme. For the next few years, that's all the justification that there needs to be.
If that looks *really legitimate* to you, then you might be easily scammed. I'm not saying they're not legitimate, but nothing that you shared is a strong signal of legitimacy.
It would take perhaps a few hundred dollars a month to maintain a business that looked exactly like this, and maybe a couple thousand to buy one that somebody else had aged ahead of time. You wouldn't have to have any actual operations. Just continuously filed corporate papers, a simple brochure website, and a couple virtual office accounts in places so dense that people don't know the virtual address sites by heart.
Old advice, but be careful believing what you encounter on the internet!
Don't be rude. "Real person" here might live in any country of the world.
And also, why a browser extension for a VPN? I live in a country where almost everybody uses a VPN just to watch YouTube and read Twitter, and none of my friends use strange extensions. There is open source software for that - from real VPNs like WireGuard to proxy software like nekoray/v2raytun. A browser extension is the last thing I would install to be private.
> What, there's an issue because I'm not being underhanded about it like [that] guy?
Wow you’ve put something into words here I never consciously realized is an unwritten rule. Sounds silly but yea you’re 100% right; that seems to be exactly the game we play.
> you'll have a better shot at dragging an actual person in front of a judge than for 99% of the other crap that's on the chrome web store
Based on what? The same instinct that told you having an address and phone number makes an entity legitimate? The chance the people behind this company live in the US is incredibly low. And even if they do live in the US what exactly would they be getting charged with and who would care enough to charge them?
> I feel like I'm being suppressed somehow. Is this belief justified based on my experience?
Imagine you saw a question like this posed at the beginning of an essay or work of fiction. 99% of the time, that essay would be a wild and delightful trip through paranoid interpretation. In fact, it would be really unusual and boring if it just dismissed this hot lead immediately after it was posed.
Well, LLMs are just improv partners in essay or story writing, not therapists or confidants, and you gave that improv partner an easy volley to run with in writing a paranoia story.
If you really need to use an LLM to find insight and advice (you really should avoid that), never give it scintillating leading questions like the one you posed here. Instead, use neutral, open questions that suggest as little as possible, and introduce only the more boring ideas when they need to be leading at all. When you fail to do that, you're just inviting it to play out your own dark fantasies. And while that may feel validating and clarifying, it's going to send you deeper into your own imagination and farther away from solutions and reality.
Please use these things responsibly, if you have to use them at all.
The Claude the industry needs is one that responds to that prompt with questions about scope and intent, and challenges its only-suitable-for-tutorials design ideas rather than obediently delivering a "90% finished product".
10 years ago, this basically marks the difference between hiring some dude on Fiverr for $400 and an actual engineer or agency who might help you figure out what the heck you're trying to do and point you in some sane direction towards it.
I appreciate this article for sharing what kind of experience people can expect from Claude right now, but it mostly demonstrates that code assistants remain most useful in the hands of experts who are careful about what they ask for, and largely misleading and slop-amplifying for people who aren't.
It'd be an interesting follow-up to have one of my kids give me prompts to make the same application and see how well it does. As in, when it doesn't save and they say "it's not working," how would it react and try to problem-solve?
Claude Code's Plan Mode increasingly does a (small-scale) version of this - it will research your codebase and come back to you with a set of clarifying questions and design decisions before presenting its implementation plan.
> [with] LLMs a lot of the ambiguity of HTML disappears as far as a scraper is concerned
The more effective way to think about it is that "the ambiguity" silently gets blended into the data. It might disappear from superficial inspection, but it's not gone.
The LLM is essentially just doing educated guesswork without leaving a consistent or thorough audit trail. This is a fairly novel capability and there are times where this can be sufficient, so I don't mean to understate it.
But it's a different thing than making ambiguity "disappear" when it comes to systems that actually need true accuracy, specificity, and non-ambiguity.
Where it matters, there's no substitute for "very explicit structured data" and never really can be.
"Disappear" might be too strong a word, but yes: as you said, as the delta closes between what a human user and an AI user can interpret from the same text, it becomes good enough for some number of nines of cases. Even if on paper it became mathematically "good enough" for high-risk cases like medical or government data, structured data would still have a lot of value. I just think more and more structured data is going to be cleaned up from unstructured data, except for those higher-precision cases.
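To make the contrast in this subthread concrete, here's a minimal stdlib-only sketch (the page content and field names are invented for illustration): when a page carries explicit structured data like JSON-LD, extraction is a deterministic parse with a verbatim audit trail, whereas an LLM reading the body text has to guess which number is the current price.

```python
# When a page embeds JSON-LD, extraction is a deterministic parse with
# an audit trail (the verbatim source payload), not an educated guess.
# The page content here is invented for illustration.
import json
from html.parser import HTMLParser

PAGE = """
<html><head>
<script type="application/ld+json">
{"@type": "Product", "name": "Widget", "offers": {"price": "19.99", "priceCurrency": "USD"}}
</script>
</head><body><p>Widget - only 19.99! (was 24.99)</p></body></html>
"""

class JSONLDExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_jsonld = False
        self.blocks = []  # raw JSON-LD payloads, kept verbatim for auditing

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self.in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_jsonld = False

    def handle_data(self, data):
        if self.in_jsonld and data.strip():
            self.blocks.append(data.strip())

extractor = JSONLDExtractor()
extractor.feed(PAGE)
product = json.loads(extractor.blocks[0])
print(product["offers"]["price"])  # unambiguous: 19.99, not the struck-through 24.99
```

An LLM scraping the body text has to decide whether 19.99 or 24.99 is the current price and leaves no record of how it decided; the structured parse never faces that ambiguity.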
> This kind of work seems like a great use case for AI assisted programming
Always check your assumptions!
You might be thinking of it as a good task because it seems like a translation of words from one language to another, and that's one of the classes of language transformations that LLMs can do a better job at than any prior automated tool.
And when we're talking about an LLM translating the gist of some English prose to French, for a human to critically interpret in an informal setting (i.e. not something like diplomacy or law or poetry), it can work pretty well. LLMs introduce errors when doing this kind of thing, but the broader context of how the target prose is being used is very forgiving of those kinds of errors. The human reader can generally discount what doesn't make sense, redundancy across statements of the prose can reduce ambiguity or give insight into intent, the reader may be able to interactively probe for clarifications or validations, the stakes are intentionally low, etc.
And for some kinds of code-to-code transforms, code-focused LLMs can make this work okay too. But here, you need a broader context that's either very forgiving (like the prose translation) or automatically verifiable, so that the LLM can work its way to the right transform through iteration.
But the transform you're trying to do doesn't easily satisfy either of those contexts. You have very strict structural, layout, and design expectations that you want to replicate in the later work and even small "mistranslations" will be visually or sometimes even functionally intolerable. And without something like a graphic or DOM snapshot to verify the output with, you can't aim for the iterative approach very effectively.
TL;DR: what you're trying to do is not inherently a great use case. It's actually a poor one that can maybe be made workable through expert handling of the tool. That's why you've been finding it difficult and unnatural.
If your ultimate goal is to improve your expertise with LLMs so that you can apply them to challenging use cases like this, then it's a good learning opportunity for you, and a lot of the advice in other comments is great. The key factor is to have some kind of test goal that the tool can use to verify its work until it strikes gold.
On the other hand, if your ultimate goal is to just get your rewrite done efficiently and it's not an enormous volume of code, you probably just want to do it yourself or find one of our many now-underemployed humans to help you. Without expertise that you don't yet have, and some non-trivial overhead of preparatory labor (making verification targets), the tool is not well suited to the work.
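The "verification target" mentioned above can be as simple as a golden-file check the tool runs after each attempt. A hedged sketch, where the file paths and the idea of comparing rendered output byte-for-byte are invented placeholders for whatever artifact your rewrite actually produces:

```python
# Sketch of a verification target for an LLM-driven rewrite: a
# golden-file check the tool can run after each attempt and iterate
# against. File names here are illustrative placeholders.
import hashlib
import pathlib

def sha256(path: pathlib.Path) -> str:
    """Hash a file's bytes so comparisons are cheap and unambiguous."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_against_golden(candidate: pathlib.Path, golden: pathlib.Path) -> bool:
    """True when the rewritten output matches the snapshot byte-for-byte."""
    return sha256(candidate) == sha256(golden)

# Typical loop: generate -> render -> compare; feed any mismatch back
# to the tool as the error to fix on the next iteration.
golden = pathlib.Path("snapshots/page.golden.html")
candidate = pathlib.Path("build/page.html")
if golden.exists() and candidate.exists():
    print("match" if check_against_golden(candidate, golden) else "diff - iterate again")
```

Byte-for-byte comparison is the strictest possible target; in practice you might relax it to a normalized DOM or screenshot diff, but the point is that the model gets a mechanical pass/fail signal instead of your eyeballs.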
- Compatible with all backup systems and most version control systems
Have you considered that stuff like this is already "more productive" for fluent users than almost any alternative could be?
Somewhere along the line, product people started to mistake following design trends and adding complexity for productivity, forgetting that delivering the right combination of fluency, stability, and simplicity is often the real road to maximizing it.
The portability point can't be stressed enough. It took me ages to liberate my notes from the OneNote cloud when I moved over to Obsidian. Which is, of course, exactly the point for Microsoft.