I originally built it to track competitive moves in a specific sector. It turns out sales teams and recruiters get the most value from it, as new exec hires are a strong outreach signal.
Has anyone found that they have better alcohol "recovery" if they are pursuing intense, or at least regular (at least 3x a week), stamina-building exercise like running or swimming?
As someone said, we live in a strange but amazing era: although it has never been easier to be deceived, it's _also_ much easier to uncover said deception, especially on the internet.
Or at least think you've uncovered deception. It's not clear to me yet that any of these "AI detectors" are reliable, and if they are, it's just an arms race.
For anyone who has not read the cockpit recording of Air France 447, I would encourage them to [1]. It is simply a jaw-dropping study in how fast things go wrong, a risk with AI we have barely begun to acknowledge, let alone regulate, as a community.
That's the one where one of the pilots pulled up the entire time, ignoring an alarm literally blaring the word "stall" for 2 minutes.
The poor captain found out in the last 10 seconds what he had been doing, but it was too late.
A couple of accidents occurred largely because Airbus averages conflicting inputs, with nothing more than a small warning light when it happens. I'm pretty sure they would have gotten the Boeing treatment if social media had been more entrenched at the time.
A bit more complicated: the aircraft itself was unable to detect the stall conditions due to icing of the pitot tubes, so the warning itself cut in and out several times. Clearly the copilots did not understand the situation, so an inconsistent alarm could be seen as spurious or a secondary effect.
> At the same time he made an abrupt nose-up input on the side-stick, an action that was unnecessary and excessive under the circumstances. The aircraft's stall warning sounded briefly twice due to the angle of attack tolerance being exceeded
...
> The crew's lack of response to the stall warning, whether due to a failure to identify the aural warning, to the transience of the stall warnings that could have been considered spurious, to the absence of any visual information that could confirm that the aircraft was approaching stall after losing the characteristic speeds, to confusing stall-related buffet for overspeed-related buffet, to the indications by the flight director that might have confirmed the crew's mistaken view of their actions, or to difficulty in identifying and understanding the implications of the switch to alternate law, which does not protect the angle of attack.
It's a complicated interplay of systems, where autonomous control systems are changing modes and receiving bad information during a complex, rapidly developing situation.
>A bit more complicated, as the aircraft itself was unable to detect the stall conditions due to icing of the pitot tubes so the warning itself was in and out several times.
The stall warning blared 74 times [1].
Of the 3 pilots in the cockpit, only one thought he had to pull up (see page 31); unfortunately, he was the one at the controls.
>rapidly developing situation.
It was the same situation from beginning to end: iced-over pitot tubes. The stall warning only started blaring when the pilot stalled the plane. Bad airspeed indicators don't stall the plane, and are something pilots are supposed to be able to handle; that's why 2 of the 3 were shocked that one did the exact opposite of what the situation called for.
It was pilot error. Just look at the report: every finding starts with "the Crew". Planes aren't supposed to crash into the ground just because an airspeed sensor failed.
I read through the link. The other pilot and the captain are complicit by virtue of being there. The autopilot disengages at 02:10 and they crash at 02:14. Terrible.
My other immediate thought: Tesla's Autopilot. I've never used it, so I'm not sure I'm fully correct here, but apparently it requires you to stay vigilant and take over in certain situations? I wonder how well that works out in practice.
In practice, there's a camera in the Tesla that looks at the driver to make sure they're paying attention. If they're not, perhaps fiddling with their phone or looking at something in the passenger's seat, then the system gives a warning and then a strike. Get five strikes and you can't use FSD for the next week or two. So drivers are directly incentivized to keep their eyes on the road because if they don't, they can't actually use the system which would suck for a long road trip.
Every day I sit down to build a product for my clients. I am a one-man shop _now_; before, I had people helping me. My mental state is not good.
A very odd thing happens when Claude or Codex completes code fast: I begin to think of all the other things needed to make the AI agent work better. I begin to worry about problems that other people used to help me with and think, "Can I do those too?" Problems like product design, devops work, etc. In trying to, I get nerd-sniped by the velocity people seem to have (and these are respected devs, not just Twitter claims). And because I am so bad at "doing it all", the long hours I have to put in are causing my mental health to suffer. I miss the friends and colleagues I worked with.
I always struggled with coding before 2023, but I made ends meet, put food on the table, could work sane hours, and knew what I needed to do. Logically I should be happy that I no longer have to grind on code (and some days I truly am), but that it would yield such a poor quality of life at such a high cost was not what I expected...
To course-correct, I began by trying to think more about solving my clients' problems by talking to them more often. That helps to some extent, because I feel happy talking to them and understanding how to solve their problems.
What I feel the issue is: I have to do everything myself to keep costs down, because the "hire another dev vs. do it with AI" calculation is real, and it has collateral damage. I spend more time trying to build AI agents to do the work, and there are one or two fewer jobs I create.
The issue is not just age verification but also device pinning.
I think the framework here is to have community-driven age verifiers (I recall there is an EU effort for digital wallets which, besides its bad parts, has some of these good parts) who can verify ages for people and link them to devices for pinning, using local, biometrically encrypted keys. This would be privacy-preserving. The only downside is a mandate that all devices have built-in hardware biometric encryption, like a fingerprint or face print, so phones can't just be used with these apps installed without a verified owner.
The verification part is a job that could be done by all the teachers and coaches and, of course, parents. Anyone verifying identities would be cryptographically nominated/revoked by a number of more senior members of the community. A parent always gets the right to say OK for their kid, of course, but so could teachers or legal guardians.
We (legally) need a mandate for smart devices to have local, device-only biometric verification. The law should require these apps to follow device app-store protocols.
I think getting the age thing correct is key to making parental classification work properly (I think platforms now just ask for a birth date, which is lame), e.g.:
> Surveys by Britain’s tech regulator, Ofcom, find that among children aged 10-12, over half use Snapchat, more than 60% TikTok and more than 70% WhatsApp. All three apps have a notional minimum age of 13: https://archive.ph/y3pQO
Once you get the classification correct (and AI cannot do this; only community ombudsmen/age verifiers can, in a privacy-first way*), the app stores can easily tell app devs which accounts are sensitive, and filtering should be much more effective.
*Basically, once your age is verified by a real human for your device (using device-local encryption to verify biometrics), you are set. No kid should be able to bypass this and install these apps on devices their parents hand to them. There will always be black-market devices with these apps, but there are ways of keeping those to a minimum with existing tech.
Why do you need any third parties whatsoever? Just have the parents do it. They configure a setting on the kid's device, which the device uses to determine what content to display. All you need from the app/service is a rating for the content. No third party should ever have to know anything about the user, because the user's device knows that, and the device knows it because the parents do.
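The model above fits in a few lines: the service only ships a content rating, and the kid's device enforces the parent's local setting. A minimal sketch with made-up rating names:

```python
from dataclasses import dataclass

# Ratings from least to most restricted; purely illustrative labels.
RATING_ORDER = ["all-ages", "teen", "mature"]

@dataclass
class Device:
    """The kid's device; max_rating is set by the parent in device settings."""
    max_rating: str

    def allows(self, content_rating: str) -> bool:
        # The service never learns the user's age; it only labels content.
        # The device compares that label against the parent's local setting.
        return (RATING_ORDER.index(content_rating)
                <= RATING_ORDER.index(self.max_rating))

kids_phone = Device(max_rating="teen")
print(kids_phone.allows("all-ages"))  # True
print(kids_phone.allows("mature"))    # False
```

No party outside the household sees anything but the rating request, which is the privacy point being made.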
This all depends on fantasy tech and/or totalitarian control of tech.
Who verifies that the person verifying the child's age is actually authorised to do that? Who verifies that verification? And so on up. This needs a chain of trust that can only end up at government. And that chain of trust will then be open to being abused by shitty politicians.
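The regress described here is exactly certificate-path validation: checking any attestation means walking the chain of vouchers until you reach a root you already trust, and the argument is that the only workable root is the state. A toy illustration (hashes stand in for real signatures; all names are invented):

```python
import hashlib

TRUSTED_ROOT = "government-root"  # the only anchor the regress can end at

def voucher(parent: str, child: str) -> str:
    """Stands in for the parent authority signing the child verifier's cert."""
    return hashlib.sha256(f"{parent}->{child}".encode()).hexdigest()

def chain_is_trusted(chain: list[str], vouchers: set[str]) -> bool:
    """chain runs [root, ..., leaf verifier]. It is valid only if it starts
    at the trusted root and every link was vouched for by the one before."""
    if not chain or chain[0] != TRUSTED_ROOT:
        return False
    return all(voucher(a, b) in vouchers for a, b in zip(chain, chain[1:]))

vouchers = {voucher("government-root", "school-district"),
            voucher("school-district", "teacher-alice")}
print(chain_is_trusted(
    ["government-root", "school-district", "teacher-alice"], vouchers))  # True
print(chain_is_trusted(["random-guy", "teacher-alice"], vouchers))       # False
```

Note that every valid chain necessarily terminates at `TRUSTED_ROOT`, which is the abuse surface the comment is warning about.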
What mechanism in (e.g) Linux is responsible for implementing this age verification so that it cannot be tampered with (or trivially overruled by a sudo call)? Which organisation is legally liable if that mechanism doesn't do its job? How can we stop someone from overwriting that mechanism with their own, in an open OS that is deliberately designed to allow anyone with root to change anything on it?
What you propose here is the death of open computing. And I personally believe that we would be much better off as a species if we kept open computing and just taught our kids how to handle social media better.
> What mechanism in (e.g) Linux is responsible for implementing this age verification so that it cannot be tampered with (or trivially overruled by a sudo call)? Which organisation is legally liable if that mechanism doesn't do its job? How can we stop someone from overwriting that mechanism with their own, in an open OS that is deliberately designed to allow anyone with root to change anything on it?
This one is easy. You just don't require all devices to do that. The parent isn't required to give the kid a general purpose computer. You don't need to prevent every device from running DOOM, only one device, and then parents who want to impose such restrictions get the kid one of those.
- The line between "general purpose computer" and "not that" is weird. Android is an implementation of Linux, after all. Probably the best example is a Steam Deck. It's just Arch Linux, you can get to a desktop on it no problem, and you get sudo access and can install whatever you like on it. Are you saying that Responsible Parents should not get their kids a Steam Deck?
- And that raises the point of how responsible are we making parents for technical decisions that they do not necessarily have the knowledge to implement? If a child works out how to circumvent the age restriction and look at boobies (or whatever) and an authority finds out, are the parents liable? Are they likely to be prosecuted? Isn't this just adding more burden and bureaucracy to the job of parenting?
> Are you saying that Responsible Parents should not get their kids a Steam Deck?
I'm saying Authoritarian Parents should not get their kids a Steam Deck. If the kid can run arbitrary code then they can get a VPN and access websites hosted in Eastern Europe and then any of this is moot because there is no law you can impose on Facebook to do anything about it.
> If a child works out how to circumvent the age restriction and look at boobies (or whatever) and an authority finds out, are the parents liable?
No, because the parents rather than the "authorities" (who TF is that anyway?) should be the ones in charge of the decision whether the kid can look at boobies to begin with.
The devices that offer a mode that blocks all unapproved content are presumably going to advertise it. If you buy something that doesn't say it has anything like that, and then it doesn't, that's the expected result. If you buy a device that says it does and then it doesn't, now you have a bone to pick with the OEM.
As long as we continue to value making money for shareholders above all else, such, and possibly worse, perversions will continue to happen. Capital has found all sorts of ways to make all sorts of questionable things addictive in order to sell.
I feel, and it's obvious to most, that the only way a society can truly reform is by a shared consensus over its value system. This verdict could be thrown out by the appellate court (I feel it will be), so this is not the culmination of values that many hoped for.
It does not seem to me that this is a country where consensus on what, if anything, to put above capital will come about any time soon, and with capital it has always been ask forgiveness rather than permission.
The only time true justice happens is when the harm becomes obvious beyond the shadow of a doubt (e.g. smoking), so that even a monkey can tell the game is up.
Perhaps only if one day we can look into people's brains with the clarity of glass and the precision of electrons will we all recognize how bad an idea social media was.