Or did they hire a team of cybersecurity specialists with the vast amount of funding at their disposal? I don't think it's reasonable to assume they used none of their other resources to search for something that could be a very profitable marketing campaign.
Years ago, this line formed in my head and has stuck around - it has been long enough that I can't remember if I read it somewhere or came up with it myself, but I think it's relevant here:
"There are only two ways to find good new music - listen to a lot of bad new music, or outsource your listening choices to someone else - and the second doesn't protect you against the first."
Outsourcing your listening choices can look like lots of different things: that friend who goes to lots of concerts and always has an amazing new band they've heard recently, radio DJs, algorithmic suggestions like Pandora or Spotify, the Billboard Top 100, your local bar's live band choices, the Grammy Awards, going to clubs where DJs play new music, etc - but ultimately they come down to the same thing, letting someone else decide what you listen to.
And while my pithy version mentions "bad new music", included in there is anything which is not "good new music", including lots of mediocre or inoffensive stuff which doesn't rise to the level of being "good".
I first thought about it in the context of music, as I was looking for new songs to choreograph to, but it's true of discovering any new products where the quality is a matter of taste or subjective assessment.
- Want to find new food you like? You either eat lots of weird foods, or you find someone (a friend, a food blogger, the NYT food reviews, your mum, anyone) to recommend you try something they've discovered.
- Want to read a good new book? Either pick up random books, most of which will be trash, until you find something you like, or find someone to filter down the books (a small bookshop which carefully curates its titles, a library's recommended reading list, the best sellers lists, Oprah's book club, etc).
- New TV shows? Watch many bad shows until you find a good one, or wait for recommendations or awards nights.
- Restaurants, clothing designers, shopping malls, Youtube channels, content creators, movies, directors, websites, etc - the story is the same.
The only place this does not apply is in contexts with objective measures that can be used as filters: if you want a new monitor, you can go to any store and filter or sort the options by objective measures like "display size", "resolution", "response time", "weight", "connectivity", etc, and find new products which meet the criteria. This still depends on someone collating the information about all the products, but you are not forced to try lots of incorrectly-sized monitors to find one which optimises your preferences. The same goes for microcontrollers, CPUs, car trailers, light bulbs, etc.
But even things with objective measures often have subjective qualities which have to be assessed - you can filter laptops on weight, RAM, clock speed, and storage, but not on how it feels to hold, whether the keys have a nice feel, or whether the machine overheats too quickly - so you're often back to the original observation on these matters too.
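The objective-filter idea above can be sketched in a few lines. This is a minimal illustration with made-up monitor specs (the product names and thresholds are hypothetical); the point is that objective attributes let you select without ever "trying" a product:

```python
# Hypothetical product data: in reality this comes from a retailer's
# collated spec sheets, not from trying each monitor yourself.
monitors = [
    {"name": "A", "size_in": 27, "resolution": (2560, 1440), "response_ms": 1},
    {"name": "B", "size_in": 24, "resolution": (1920, 1080), "response_ms": 5},
    {"name": "C", "size_in": 32, "resolution": (3840, 2160), "response_ms": 4},
]

def meets_criteria(m, min_size=27, max_response_ms=4):
    """Objective filter: no subjective assessment required."""
    return m["size_in"] >= min_size and m["response_ms"] <= max_response_ms

matches = [m["name"] for m in monitors if meets_criteria(m)]
print(matches)  # ['A', 'C']
```

No equivalent predicate exists for "is this song good", which is the whole point of the quote above.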
Well, yeah, it's all subjective - and actually quite tenuous - so you won't know good and bad until you actually make the call on it. Maybe you've had the experience even of coming around on some music you previously thought was bad.
Or like: one time I listened to a bunch of new music I had dug up and wasn't sure there was anything I liked. Two days later, I had a song in my head. Turned out to be one of the ones I had listened to. But I had to listen to everything all over again to find it! ദി(ㅠ﹏ㅠ) Glad I did - there were other gems in there.
> With their ability to shapeshift and manipulate delicate objects, soft robots could work as medical implants, deliver drugs inside the body and help explore dangerous environments.
I'm not sure that's a big strike against it yet. Kinda the whole point of engineering in academia is to work on hard things that are far from commercialization.
The fact that a product has not yet been created from a given technology does not mean the technology or the research itself is useless, or will not turn out to be useful in the long term. You can also learn a lot from research or development that does not ultimately work out.
>>"never once seen a productionalized version of these"
YET
Just because we have not YET seen one does not mean it should not be pursued.
Examples are endless; to start with: 30 years ago, no one had seen a solar panel with 25% efficiency produced for less than $1/watt. Now, it is the most economical, fastest-growing, and most sustainable energy source on the planet.
That argument is simply an argument against all efforts at making progress. Perhaps rethink making it?
Assistance of other humans? You do realise we're talking about an intelligence test, right? At that point, what are you even testing for? I'm sure you've taken exams where you couldn't bring your own notes, use Google, or get help from someone, even though real life doesn't have those constraints.
If they really believed that their process eliminated any licensing conditions, why would they limit themselves to open source projects?
High quality decompilers have existed for a long time, and there's a lot more value in making a cleanroom implementation of Photoshop or Office than of Redis or Linux. Why go after such a small market?
I suspect the answer is that they don't believe it's legal; they just think they can get away with it because they're less likely to get sued.
(I really suspect that they don't believe that at all, and it's all just a really good satire - after all, they blatantly called the company "EvilCorp" in Latin.)
Almost the entirety of the technology world is English-speaking, not English-native.
Pretending that it's English-native is why there are unspoken incentives to sound more "native", and thus use these grammar-correcting tools.
Some of the intelligent comments on here come from people who learned English in recent months or years, rather than in childhood.
Their English isn't always fluent or well-structured. If they rely slightly more heavily on suggested-next-word tools or AI translations, is that a reason to exclude them from the conversation?
Conversely, many English learning resources for non-native speakers focus on strict formal language, similar to AI-generated text. Do we risk excluding people who have learned a style more formal than we're used to?
One physical robot with four wheels, a camera, and 101 up/down "fingers" to match the keyboard can roll between physical machines and type on mechanical hardware keyboards. This brings the ceiling of how many accounts you can control down to the number of computers you have, but that's not a high price to pay.
It doesn’t stop people posting AI slop, it stops people from posting AI slop more than once. If you ban somebody for spamming today, they just create a new account and keep on spamming. If you can determine they are the same person you banned before using verifiable credentials, it makes the ban actually effective.
Layer on captchas. It won't completely stop slop but it's an incentive against slop flooding. And I mean, nothing is stopping a human from just going into ChatGPT by hand and asking for output and copy/pasting that into an HN post box.
On what grounds can we assume that? That's what the marketing department wants us to assume, but what makes us even suspect that that's what they did?