Hacker News | lelanthran's comments

> It’s pretty artificial in general; in “real life” you have the ability to go around online and look for sources.

I dunno how you work, but I'd be getting raised eyebrows from people watching me hit Google for any question my role requires of me.

I mean, we're not talking about using calculators here, and we're not talking about vocational training (how do I do $FOO in Docker? In K8s? How do I write a GH runner? Basically any question that involves some million-dollar company's product).

We're talking about college stuff; you absolutely should not be allowed to look up linked lists for the first time during an exam, copy the implementation from Wikipedia, port it to your language and move on.
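For context, a minimal singly linked list, the kind of thing a college exam expects you to produce from memory, is only a handful of lines. A Python sketch for illustration (the class and method names here are my own, not from any particular course):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next


class LinkedList:
    def __init__(self):
        self.head = None

    def push(self, value):
        # Insert at the head in O(1); the new node points at the old head.
        self.head = Node(value, self.head)

    def to_list(self):
        # Walk the chain front-to-back, collecting values for inspection.
        out, node = [], self.head
        while node:
            out.append(node.value)
            node = node.next
        return out
```

If you can't write something at this level of difficulty unaided, looking it up mid-exam is exactly the coasting being described.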

In the real world, we want people who mostly know what to do. The real world is time-constrained (you could spend 2 hours learning to do what they thought you could do based on your diploma, but they'd be pissed to find out that you need to look up everything because that's how you coasted through college).

Exam situations are more like the real world than take-home assignments: High-stakes, high-pressure, timeboxed.

If your real world does not have high-stakes, high-pressure, timeboxed tasks, then you really haven't had much contact outside of your bubble.


You don't see any way that this is different?

> Claude Design is just a big opinionated prompt: https://www.lobsterpack.com/blog/claude-design-trenchcoat/.

Do you think anyone expected it to be something other than a prompt with skills, tools, etc?

Also, you certainly got some brass, linking an entirely AI generated article in a forum where extreme distaste is registered for entirely AI generated posts.

How is your comment not downvoted to oblivion?


Sure, the blog post is AI generated (I'm not a native English speaker; people like me are often shy about our English skills), but the research behind it is manual.

I found it interesting that (contrary to the popular opinion) there wasn't some magic, e.g. a novel model, happening with Claude Design, especially magic enough that Figma wouldn't be able to replicate if they felt like it.

Also, apparently human (not artificial) neurons were behind that huge prompt: people very well aware of the model's limitations, cheating here and there to make Design's outputs more impressive, making it "create a (design) plan" beforehand, i.e. all the stuff that we common laymen could do ourselves with the same tools.


> Also, you certainly got some brass, linking an entirely AI generated article in a forum where extreme distaste is registered for entirely AI generated posts.

> How is your comment not downvoted to oblivion?

I'm sure there's a polite way to say things.

I heavily dislike LLM content, but if you read the content, it's actually got information of value.


> I'm sure there's a polite way to say things.

That was the polite way of saying it. "If you couldn't be bothered to write it, why would anyone bother to read it?" is a saying from Usenet times.

The truth is it took the author less time to "write" that piece than you to read it. It's a blog. There's no deadline, and yet they couldn't take the time to actually type out their own thoughts.

> I heavily dislike LLM content, but if you read the content, it's actually got information of value.

If it was so valuable the author would have written it themselves.


I've seen the front end devs get PSD files as recently as 2021.

Why would they have to do it themselves?

> Unfortunately our knowledge of language APIs and syntax has diminished in value, but we have so many more skills that will be just as valuable as ever.

There were always jobs that required those "many more skills" but didn't require any programming skills.

We call those people Business Analysts and you could have been doing it for decades now. You didn't, because those jobs paid half what a decent/average programmer made.

Now you are willingly jumping into that position without realising that the gap between your current salary and your value in that role (i.e. half your salary, or less) will eventually close.


I guess we will need to wait and see if AI can remove ALL of the complexity that requires a software engineer over a business analyst. I can't currently believe that it will.

BAs I've worked with vary in technical capability from "has coded before and understands DB schema basics and network architecture" to "I know how the business works but nothing about computers". If we got to the point in the future where every computer system ran on the same frameworks in the same way, and AI understood it perfectly, then maybe.

But while AI is a probabilistic technology manipulating deterministic systems, we will always need people who understand what's going on, and whether they write a lot of code or not, they will be engineers, not analysts. Whether it's more or fewer of those people, we will see.

> If we got to the point in the future where every computer system ran on the same frameworks in the same way, and AI understood it perfectly, then maybe.

They don't need to all run on the same frameworks, they just need to run on documented frameworks.

What possible value can you bring to a BA that the LLM can't?

The system topology (say, if the backend was microservices vs Lambda vs something-else)? The LLM can explain to the BA what their options are, and the impact of those options.

The framework being used (Vue, or React, or something else)? The AI can directly twiddle that for the BA.

Solving a problem? If observability is set up, the LLM can pinpoint almost all the problems too, and with a separate UAT or failover-type replica, it can repro, edit, build, deploy and test faster than you can.

Like I already said, if[1] you're now able to build or enhance a system without actually needing programming skills, why are you excited about that? You could always do that. It's just that it pays half what programming skills gets you.

You (and many others who boast about not writing code since $DATE) appear to be willingly moving to a role that already pays less, and will pay even less once the candidates for that role double (because now all you programmers are shifting towards it).

It's supply and demand, that's all.

--------------

[1] That's a very big "If", I think. However, the programmers who are so glad to not program appear to believe that it's a very small "If", because they're the ones explaining just how far the capabilities have come in just a year, and expect the trend to continue. Of course, if the SOTA models never get better than what we have now, then, sure - your argument holds - you'll still provide value.


> So, you havent really learned anything from any teacher if you could not do it again without them?

Well, yes?

What do you think "learning" means? If you cannot do something without the teacher, you haven't learned that thing.


> Harking back to the old days of buying clock time on a mainframe except you're getting it for free for a while.

I submitted this yesterday but it got no traction (I did not write it): https://www.mjeggleton.com/blog/AIs-mainframe-moment


>> The companies that are entirely AI-dependent may need to raise prices dramatically as AI prices go up.

> It's not that clear. Sure, hardware prices are going up due to the extremely tight supply, but AI models are also improving quickly to the point where a cheap mid-level model today does what the frontier model did a year ago.

I agree; I got some coding value out of Qwen for $10/m (unlimited tokens); a nice harness (and some tight coding practices) narrows the gap between SOTA models and six-month-old second-tier models.

If I can get 80% of the way to Anthropic's or OpenAI's SOTA models using $10/m with unlimited tokens, guess what I am going to do...
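As a back-of-the-envelope sketch (all prices here are made-up assumptions, not any vendor's actual rates), the breakeven point between a flat $10/m unlimited plan and metered per-token pricing is simple arithmetic:

```python
# Hypothetical prices for illustration only; real rates vary by provider.
flat_monthly = 10.00          # e.g. a $10/month unlimited-token plan
metered_per_million = 3.00    # assumed metered price per 1M tokens


def breakeven_tokens(flat, per_million):
    # Monthly token volume at which metered billing costs the same as
    # the flat plan; above this volume, the flat plan wins.
    return flat / per_million * 1_000_000


print(f"{breakeven_tokens(flat_monthly, metered_per_million):,.0f} tokens/month")
```

Under these assumed numbers, anyone burning more than a few million tokens a month comes out ahead on the flat plan, which is most of the appeal for heavy agentic use.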


GitHub Copilot is already $10 and I don't even use up the requests every month, it's the most bang for buck LLM service I've used.

Until May

What’s happening in May?

GitHub Copilot switches all users from per-prompt to per-token billing.
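To illustrate why the billing model matters (with entirely hypothetical numbers, not GitHub's actual prices), per-prompt and per-token bills can diverge sharply once prompts get token-heavy:

```python
# Hypothetical usage and prices for illustration only;
# not GitHub Copilot's real rates.
requests_per_month = 300
tokens_per_request = 4_000

price_per_request = 0.04      # assumed flat per-prompt price, in dollars
price_per_token = 0.00002     # assumed metered per-token price, in dollars

# Under per-prompt billing the token count is irrelevant;
# under per-token billing, heavy prompts cost proportionally more.
per_request_bill = requests_per_month * price_per_request
per_token_bill = requests_per_month * tokens_per_request * price_per_token

print(f"per-prompt: ${per_request_bill:.2f}, per-token: ${per_token_bill:.2f}")
```

The same usage pattern that is cheap under per-prompt billing can double in cost under per-token billing, which is presumably why the switch matters to the commenter above.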

> Buying oranges for $1 and selling for $0.5 is an investment into acquiring market share and customer relationships

It's a delusion that customers are going to remain with the behemoths when a Qwen model run by an independent is $10/m, unlimited usage.

This is not a market that can be locked-in with network effects, and the current highly-invested players have no moat.

