The other person commenting does have a point: if you're inviting a bunch of attendees from 'global south' nations, there are many places in South America and Africa where getting a Schengen-zone visa is difficult, complicated, and by no means guaranteed. You can't just "go there".
Ah, good point, a number of the people going to RightsCon would by definition be coming from countries where this was difficult. I withdraw 'n apologise.
- Claude Team plan is now limited to 150 users[1]. You can mix between Premium (equivalent of Max 5x) and standard (Pro-like) seats.
- ChatGPT Business is not user-count limited (AFAIK); however, the Business plan only supports the equivalent of Plus, and there is no Pro equivalent.
For larger organizations it will still come down to usage billing. The ChatGPT plan is quite limiting for most users, and the only option for more consumption is usage/credit billing. Claude has the more generous usage in Team, but it is limited to 150 users.
This might actually be quite nice: the Blender Python API is currently very useful but very touchy. There are lots of differences in behavior in headless mode which are hard to debug (you can't open the GUI to see what's happening, because that changes the behavior).
Yes, the Blender API feels like it sits on top of the GUI rather than the GUI sitting on top of the API. When you write scripts against the Blender API you basically mechanically describe the steps you would take in the UI, which can be a little fragile at times.
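For what it's worth, one common way around that fragility is to skip `bpy.ops` (the operator layer, which mirrors UI actions and depends on GUI context) and build objects through `bpy.data` directly, which behaves the same headless or not. A minimal sketch, assuming it runs inside Blender where `bpy` is available (the geometry helper is plain Python):

```python
def cube_geometry(size=1.0):
    """Vertices and quad faces for an axis-aligned cube (plain Python)."""
    s = size / 2.0
    verts = [(x, y, z) for x in (-s, s) for y in (-s, s) for z in (-s, s)]
    faces = [(0, 1, 3, 2), (4, 6, 7, 5), (0, 4, 5, 1),
             (2, 3, 7, 6), (0, 2, 6, 4), (1, 5, 7, 3)]
    return verts, faces

def add_cube(bpy, name="Cube", size=1.0):
    """Create the cube via bpy.data, which needs no GUI context,
    unlike bpy.ops.mesh.primitive_cube_add()."""
    verts, faces = cube_geometry(size)
    mesh = bpy.data.meshes.new(name)
    mesh.from_pydata(verts, [], faces)
    mesh.update()
    obj = bpy.data.objects.new(name, mesh)
    bpy.context.collection.objects.link(obj)
    return obj
```

Passing `bpy` in as an argument keeps the module importable (and the geometry testable) outside Blender.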
I've used Claude to write some Blender scripts and it's an excellent use case. I look forward to even better Claude/Blender interaction based on this announcement.
I've also used genAI to write scripts. It works splendidly up to a point, and then there is absolutely no way to move the needle further. And it's not even close to renders I would ever publish.
That said, it's about the same as the code it produces for things that aren't purely creative; but for artistic work, I doubt putting an LLM in between gives any gain. After all, we do have an interface. A human interface.
Artists mad about AI art ought to welcome this. This is about making art tools better, instead of replacing them entirely. The alternative to this is AI just generating art directly and making tools like Blender obsolete.
Art generators have a long way to go before they can completely replace art tools. I dabble, but even so there have been times when it would have been faster to composite in a 3D model than to keep trying to prompt an image generator into fixing something.
This is what I do. It’s been really helpful for taking existing FBX files and handing them off to the agent + Python Blender API to analyze the geometry, convert to GLBs, etc.
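As a hedged sketch of that pipeline: Blender can run headless with `blender --background --python convert.py -- in.fbx out.glb`, and the script reads its own arguments after the `--` separator. The converter takes `bpy` as a parameter so the argument handling is usable outside Blender; the operator names (`bpy.ops.import_scene.fbx`, `bpy.ops.export_scene.gltf`) are Blender's built-in FBX/glTF add-ons.

```python
import sys

def cli_args(argv):
    """Blender keeps its own flags before '--'; script args come after it."""
    return argv[argv.index("--") + 1:] if "--" in argv else []

def convert_fbx_to_glb(bpy, src, dst):
    """Import an FBX and re-export the scene as a binary glTF (.glb)."""
    bpy.ops.import_scene.fbx(filepath=src)
    bpy.ops.export_scene.gltf(filepath=dst, export_format='GLB')

# Inside Blender (headless) you would finish with:
#   import bpy
#   src, dst = cli_args(sys.argv)
#   convert_fbx_to_glb(bpy, src, dst)
```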
What model are you using? Codex with gpt-5.4 set to xhigh (and now gpt-5.5) seems to have zero issues helping me with rigging and fixing GLB/FBX models; it works like a charm. One time I instructed it to iterate with screenshots because the task was gnarly, but usually it figures everything out even when headless.
I disagree that anyone should need LLMs for Blender, for example, because Blender is designed by people to be understood and used by people, even if it has a learning curve. It seems a bit dangerous to build new things we don't understand, or worse, to reduce our understanding of what we currently use just because an LLM (only after studying our use of that same technology) appears able to replicate it, mostly.
I'm reminded of Sam Altman's performative helplessness on Jimmy Kimmel, when he described being unable to care for a baby without ChatGPT. That's something humanity has been capable of for a good portion of its existence, and not something we should hand over to a yet-unproven, yet-unprofitable technology.
Surely there's a middle ground where improved APIs can be leveraged by people and LLMs alike while keeping those APIs approachable? Why would changing the Python APIs necessarily lead to "need[ing] LLMs for Blender"? I'm nowhere close to an AI maximalist, but this criticism seems grounded in execution concerns. I'm definitely not saying they won't mess this up and make the APIs overly complex; I just don't think that's necessarily going to be the case.
Regarding whether AI can/could overcome the hurdle of human understanding: I'm not sure if that's really a hurdle. Let's say in theory, a system was crafted by AI to be interacted with exclusively by AI. Broadly, I assume the outcome of the system would be for people, and it would have some purpose or value. Now my question is: how do we verify it functions? If it is a black box that nobody understands, then we can't verify it at all, and we can't debug it if there's something wrong with it. We circle back to the human understanding issue.
(I'm sorry if my tangent about Altman was taken as a personal affront, as I did not mean it to be that. It just muddied the two interesting topics you brought up.)
Not everything is abstract art. Sometimes I want my subsurf modifier to only target certain vertex groups, and if I can use AI to make that happen in a few seconds, that's a huge win for me.
Blender (and CAD programs as well) get in the way of creativity.
I know what I want, no idea how to tool my way there.
I spent two months going through YT tutorials and mucking about in Blender in order to figure out how to put together the model I had in my head [1].
(A year later, a new project idea, and it's back to YouTube, because the learning curve is not only steep but sometimes so esoteric that the knowledge is fleeting.)
Absolutely agree - I was not impressed, but it will be a lot easier to work with the tooling, without a 10-month crash course on UI and 3D terminology, if I can ask for what I want in plain language instead of knowing which button buried three levels deep to press to get my desired result.
Frankly, I love the idea of an automation engine printing out tangible works. I actually build spritesheets that way! Load a bunch of individual gimp files as layers, set them offset by a given parameter, and boom, done!
Would be rad to incorporate some statistical, procedurally generated designs based on my own apparatus.
What I do not want to see is LLM vendors hijacking decades of hard work and consideration that went into these integration channels, tailoring them toward their LLMs rather than toward the diligent engineer.
If they want to push their tentacles that far while making products more difficult to work with for innovation of a different color, they are making an enemy out of at least me.
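The spritesheet mechanic described earlier is easy to make deterministic. A minimal sketch (plain Python; the function and parameter names are my own, not any GIMP API) that computes the sheet size and the per-frame offsets you'd feed to whatever places the layers:

```python
def sheet_layout(n_frames, frame_w, frame_h, columns):
    """Return (sheet_w, sheet_h) and the (x, y) offset of every frame,
    laid out left-to-right, top-to-bottom on a fixed-column grid."""
    rows = -(-n_frames // columns)  # ceiling division without math.ceil
    offsets = [((i % columns) * frame_w, (i // columns) * frame_h)
               for i in range(n_frames)]
    return (columns * frame_w, rows * frame_h), offsets
```

With GIMP's Script-Fu, a Python batch script, or ImageMagick you would then paste each layer at its computed offset.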
Honestly, I think this is a stepping stone towards replacing industry CAD modeling tools.
AI _can_ work with 3D models already, but it's really bad at it. CAD requires an extra level of control and I think this is where I could see AI companies wanting to get a foot in the door.
e.g "Let's build an adapter between 2in BSP Male and 3/4in NPT Female threads with a third Hose Barb outlet with the following properties..."
Years ago I had found his channel and liked it, and then some video (I don’t remember the title or even subject at this time) came up on a subject I happened to know quite a bit about. He got to some point in an explanation, bungled it, and then hand-waved it away, saying the details were unimportant.
Stopped caring about anything he had to say after that, and I also then realized that there was an entire genre of "person with no actual expertise reads Wikipedia articles and explains them with good lighting and high production quality."
Same. It was the "How The U.S. Ruined Bread" video for me, after which I started watching critically and found the editing style over the top; it makes it harder to think about the content while it's being presented. So I eventually stopped watching.
Interesting. I had a similar experience with Veritasium’s video on kinetic bombardment, where I think they dismissed the concept based on tossing bowling balls out of helicopters onto sandcastles.
Comments here should be read as opinions, not facts. Every time there is a subject I know deeply, 90%+ of the comments are either factually incorrect or just bad opinions.
Most things? That's a really strong claim, do you have anything to back it up with? Just a couple videos here and there wouldn't cut it, given how strong your claim is.
For what it's worth I watch his videos and he seems to touch on incredibly valuable topics I would never hear about otherwise, like [1].
> The quick cuts and dazzling montages, as well as the dramatic shots of Harris absorbed by a document he’s unearthed, highlighting it suspensefully in tight close-ups, all lend credence to the often-excellent work he does. But it also makes it easy to mask his mistakes. And for someone who takes journalism to heart, his mistakes are big, leading to oversimplification and an occasional lapse in skepticism.
[...]
> In a video that garnered 8.5 million views and which Harris thumbnailed with the words “WE HAVE PROOF,” Harris explores the recent craze over UFO sightings—sorry, UAP sightings, meaning unexplained anomalous phenomena. In passing, he mentions Mick West, who has done excellent work debunking a lot of blurry footage of what is alleged to be high-tech spy drones or aliens.
> But the bulk of the video is spent leering at report after report—a total of 144 are being investigated by the U.S. government right now!—while original music amps up the mystery. The emphasis on evidence over context is key to Harris’ style: flood the space with visuals that keep your attention and elicit questions and only occasionally pull back to explain.
Have you tried creating a Google account without a mobile phone number from a public computer? Go ahead, I’ll wait.
Ok, so assuming he doesn’t want to spend $500+ for a mobile phone, he’s looking at an Android. Then, when he logs into a Google account, Google hoovers up his location, his associated credit card (if he has one, what if he does not and does not want one?), and countless other personal metadata at the very least that will likely never go away. Even if he does suddenly go from no smartphone to being a savvy personal steward of his digital privacy, you can bet that Google is scrambling to capture as much as possible, at all times, about its users’ personal lives and data.
- If he doesn’t want a Google account, /just/ create a new one
- If he doesn’t have a credit card, /just/ use a family member’s
- If he has Parkinson’s and can’t use touch input, /just/ have a friend do it
- etc.
The question is not whether these obstacles can be overcome (trivially, by “normals”). The question is whether we want these to be the default requirements for basic participation in society. And it’s a completely legitimate question.
I have a friend who is legally blind. He has the font size turned up several notches on his phone so he can just about see it with a huge magnifier on his remaining eye. A majority of the apps on his phone are absolutely not designed for increased font size and are a nightmare to navigate.
Then he can't buy the tickets. People aren't born with a god-given right to season tickets to Dodgers games. There are businesses that choose not to handle cash and only accept credit or debit payments. I need to agree to a credit card company's terms and conditions for that too. Is that unreasonable?
The key message the earlier poster tried to convey was that they themselves do not believe in their own products, not that rich kids are privileged royal kings today. This ties into, e.g., Facebook trying to get people addicted to using it, with infinite scrolling as an example. The latter can be quite a problem on YouTube, or for people using smartphones while riding the subway, jumping from pointless video to pointless video; it is quite addictive.