I was previously a long-term dev contractor for a stock photography company. NFTs do have a function in the ownership of digital assets, though I don't personally advocate for them (the company or the use of NFTs).
Per a cursory question on the Goog I got: 95% of collections are now considered worthless.
I worked at a blockchain startup back in the day, and despite the PTSD of working with insane people, I appreciate the concept of the blockchain. I've yet to see any mainstream value in it. Sure, bitcoin is worth a lot, but it's not because of the inherent value of blockchain finance; it's because there's too much money out there and everybody loves a fat bubble asset.
I never got what blockchain would do better, except maybe distribution. And even there, it raises questions about the efficiency of having data copied in so many places.
Correcting data is also always a fun question. Your keys are stolen, and now your house isn't yours anymore because the record is immutable. Oh, it's not actually immutable if someone decides it isn't (just replace it with a new record)? What was the point again...
The appeal to me is that it is effectively a "serverless" shared immutable ledger. By "serverless" I mean it's not a service behind someone's API where you just have to trust the data it emits.
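The "shared immutable ledger" idea can be sketched as a toy hash chain (a hypothetical minimal example, not any real blockchain protocol): each record commits to the hash of the record before it, so anyone holding a copy can detect tampering without trusting a server.

```python
import hashlib
import json

def block_hash(contents):
    # Deterministic hash of a block's contents, including the previous hash.
    return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

# Build a tiny ledger: each block commits to the one before it.
chain = []
prev = "0" * 64
for record in ["alice owns parcel 1", "alice transfers parcel 1 to bob"]:
    block = {"prev": prev, "record": record}
    block["hash"] = block_hash({"prev": prev, "record": record})
    chain.append(block)
    prev = block["hash"]

def verify(chain):
    # Recompute every hash; any edit to history breaks the chain.
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or b["hash"] != block_hash({"prev": b["prev"], "record": b["record"]}):
            return False
        prev = b["hash"]
    return True

print(verify(chain))   # True: the chain is intact
chain[0]["record"] = "mallory owns parcel 1"   # tamper with history
print(verify(chain))   # False: the stored hash no longer matches
```

Of course, this is exactly where the "correcting data" problem bites: the same property that makes tampering detectable makes legitimate fixes awkward.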
I think it could be useful for contracts and governance, but the whole crypto thing is just pointless (except ostensibly for transferring money). That's helpful as a tool, but as an investment it just strikes me as nuts. Which is why I prudently said hell no to buying bitcoin at $20 because it was a stupid fad. Sigh.
Respectfully, I am starting to find "AI will only get better in the future" to be a cheap and empty statement. Optimism is good, but it doesn't take into account the tremendous nuance of the topic and the current thread.
As far as AI-generated images go, they still make me nauseous due to uncanny-valley stuff. I still see a lot of non-standard finger counts; so much of the content elicits a weird double-take and a gut-dropping feeling.
Model training is similar to the creation of CGI for a movie. Both happen before anyone consumes the output, and both represent an up-front cost for the producer.
Both a movie and a large language model can cost tens or hundreds of millions of dollars to produce.
In both cases, additional infrastructure is needed for efficient consumption: movie theaters or streaming platforms for movies, and data centers with GPUs for LLMs. These are also up-front (capex) costs.
At consumption time, the movie requires some additional resources per viewing, whether in a theater or via streaming. Likewise, an LLM consumes some resources at inference time. These are opex. In both cases, the marginal cost of consumption/inference is quite low.
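The capex/opex split can be made concrete with a back-of-the-envelope calculation (the numbers below are made up purely for illustration, not real figures for any movie or model):

```python
# Hypothetical, made-up numbers purely for illustration.
capex = 100_000_000       # up-front production cost: training run or movie shoot ($)
opex_per_use = 0.01       # marginal cost per inference or per stream ($)
uses = 1_000_000_000      # total consumptions over the asset's lifetime

amortized = capex / uses              # up-front cost spread over all uses
cost_per_use = amortized + opex_per_use

print(f"amortized capex per use: ${amortized:.3f}")     # $0.100
print(f"total cost per use:     ${cost_per_use:.3f}")   # $0.110
```

The point of the analogy: with enough consumption volume, the huge fixed cost amortizes down to pennies per use, and the per-use figure is dominated by neither term alone.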