What about setting up environments for normies that mitigate this problem? I don't know that you can do it on Windows, but Linux offers various tools for isolation where you can give full rights to an LLM and still be safe from certain classes of disaster.
Maybe this kind of isolation neuters the benefit you're thinking of, but I do believe some sort of solution could be reached.
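To sketch the idea in Python (a minimal sketch, assuming a POSIX `sh` is available; `run_sandboxed` is a hypothetical helper name): this only scopes an agent's commands to a throwaway directory with a scrubbed environment, which is not real isolation — on Linux you'd layer kernel-level tools like bubblewrap, firejail, or containers on top.

```python
import subprocess
import tempfile

def run_sandboxed(cmd):
    """Run a shell command in a throwaway directory with a minimal
    environment. This is only scoping, not isolation: real sandboxing
    on Linux would add namespaces/seccomp via bwrap, firejail, etc."""
    scratch = tempfile.mkdtemp(prefix="llm-scratch-")
    return subprocess.run(
        ["sh", "-c", cmd],
        cwd=scratch,  # commands start inside the scratch dir
        env={"PATH": "/usr/bin:/bin", "HOME": scratch},  # no user env leaks in
        capture_output=True,
        text=True,
        timeout=30,  # don't let a runaway command hang the agent loop
    )

result = run_sandboxed("pwd")
```

The point is just that the blast radius of "full rights" can be shrunk in layers; the kernel-level tools do the heavy lifting, and the wrapper decides what the model ever sees.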
I recall having a conversation with a friend who was angry at this kind of thing sometimes being called "Tail Call Optimization". It's a guarantee that your program won't literally crash with a stack overflow if you recurse too much using tail calls; the fact that it might have performance benefits is secondary in importance. So the name can be a bit misleading.
Back then Rust's tail call story looked bad. The problem slipped my mind for years and now I'm suddenly made aware of this become keyword, and it seems like a great idea -- to make the developer's intent more explicit. Getting better target support for this is doing god's work.
As an aside, I would slightly argue against the idea that it's useful only for a narrow set of libraries. It's useful for the developer as a user of the language to be able to express things in either iterative or recursive style without worrying as much about whether the latter is going to panic. Maybe if I had to deal with all the work of making such features functional in a language, I would value expressiveness a bit less!
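To make the crash-avoidance point concrete, here's a small sketch in Python (which, like stable Rust today, performs no tail-call elimination; the function and variable names are mine). The recursive version has its call in tail position yet still exhausts the stack, while the loop that a guaranteed-tail-call construct would effectively turn it into runs at any depth:

```python
import sys

def count_down_rec(n):
    # The recursive call is in tail position, but CPython does no
    # tail-call elimination, so every call still consumes a stack frame.
    if n == 0:
        return "done"
    return count_down_rec(n - 1)

def count_down_iter(n):
    # The loop a guaranteed tail call would effectively compile into:
    # constant stack space regardless of depth.
    while n != 0:
        n -= 1
    return "done"

depth = sys.getrecursionlimit() * 2  # deep enough to overflow the frame stack

try:
    rec_result = count_down_rec(depth)
except RecursionError:
    rec_result = "overflow"

iter_result = count_down_iter(depth)  # completes fine at the same depth
```

With guaranteed tail calls (e.g. Rust's proposed `become`), the recursive spelling would be as safe as the loop, which is exactly the expressiveness point.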
Trump's second term has had dire consequences for the US that are simply not on the same level as Obama, and it could have been avoided. The democrats are also terrible, but it's a matter of harm reduction. In this case, I would strongly argue that the difference was greater than 99 and 100.
You should vote to do harm reduction. Elections happen regardless of what you do. Whether 30% or 100% of Americans vote, the winners of elections still get access to the same amount of state power. The system does not require our political participation to continue to exert control over our lives. Abstaining from voting is not an effective tactic in reducing the legitimacy of the system. That tactic might work in other situations, but not in this one.
I hope you will keep your distaste for both parties, but still vote for the lesser evil, even if it's distasteful. Because I think we should help that one person not get stabbed. And if indeed you have voted for the lesser evil and my post has a tone that assumes otherwise, I apologize.
The fact that the police won't use surveillance data in a way you would consider good and effective does not mean you don't live in a surveillance state.
One of the key aspects of police and surveillance states is that the incentives are structured so that the policing and surveillance need not be done with the interest of public welfare in mind. As you said there is no political will.
If you've ever had trauma, especially recent, you'll appreciate well done content warnings. You don't want the dramatic plot twist to turn out to be exactly the topic you've been trying to avoid so that you can slowly get better.
If you've experienced a certain kind of trauma, it's not a matter of willpower. It involves a loss of control over one's emotional response and thoughts, which can be triggered by things that relate to your trauma.
Don't knock content warnings just because they lack rigorous evidence or because "trigger warnings" became the butt of jokes for a while. They have genuine utility.
The problem is they are explicitly arguing that all of our best science says trigger warnings are counterproductive for getting better. Just a quick google search of 'scientific support for trigger warnings' will get you all sorts of meta-analyses, RCT results, etc. on this. At best they don't seem to actually do anything, and at worst, they actively impede your ability to get better.
That doesn't mean it's a matter of willpower, but it does suggest that avoiding your triggers or trying to use trigger warnings to prepare you for dealing with them provides no benefit. Your use of the word avoid pretty much sums up the core problem here - on a personal enjoyment of day-to-day life level, avoiding your triggers makes perfect sense. On the long-term healing and not being traumatized by them level, you don't want to do that. (Edit: This isn't to say you should try taking exposure therapy into your own hands and just surround yourself with the stuff. None of this is a replacement for guided therapy. But specifically going out of your way to avoid these things is 'avoidant behavior' and is pretty much universally recognized as being a bad thing when it comes to dealing with PTSD etc.)
That being said, I believe everyone should be able to disclaim what they want and that people can choose how they approach their own self-care, even if it isn't supported by the science.
Exposure and Response Prevention therapy works. You will never get fully well without exposure. However, it requires that you find stimulus of a magnitude that makes you uncomfortable, but doesn't send you outright spiraling. You need to keep steady while experiencing it for a while.
Content warnings give you the ability to estimate what intensity of negative stimulus you will experience, and this is important when dealing with actual triggers.
Not everyone is yet at the phase where they can handle a certain level of exposure. For some unfortunate cases it takes a long time to be well enough to start being able to handle exposure.
That being said, I do think content warnings need to be specific, not generic. The most useful ones are spoilers, not generic messages to put you on guard. Careful AO3 authors do a better job at this than most games. There are technical solutions that allow interested parties to get this information without having to spoil the default audience, but we live in a busy world that has a lot of things to care about other than this.
Everything you wrote sounds really good in theory - it passes the smell test for me, and I believed it for a long time because it seemed perfectly logical. It all just Made Sense to me in an intuitive manner.
But there's pretty universal agreement that avoidant behavior isn't a good thing. There's a difference between the awful idea of trying to self-manage exposure therapy or forcing exposure and allowing yourself to be exposed to things in the manner that matches the 'real world.' If someone wants to put 'Dead Dove' on their AO3 fic and provide a trigger warning because the fic is based around that thing, then yeah, that's one thing. I wouldn't recommend someone go watch Hostel if their trauma is at all related to that either. But most media that has triggering content isn't anywhere near those extremes. And obviously, if the trauma just occurred, it's a whole different thing. But if the studies that show an increase in avoidant behavior from trigger warnings are right, it's increasing a bad thing. If the studies that show a 'forbidden fruit' effect are right, then it's a negative for the proposed benefit from trigger warning proponents.
But most studies show no increase or negligible increase in avoidance for the study participants, including trauma groups. So if that's the case, they aren't doing what proponents are saying is their core benefit, either.
Meanwhile quite a few show an increase in anxiety from the warning itself, which is obviously a negative.
I'm open to the idea that there might be some effective way to do trigger warnings - more specific warnings up to spoilers, or something couching it in context of how this relates to recovery and how to manage it, etc. etc. - something along those lines. There's certainly plenty of precedent for a general idea being right and the initial implementations of it being bad. But proving that is going to come down to someone figuring it out and getting studies that show positive impact.
Before we had Trigger Warnings as a term, we had movie and game ratings that said what you'd see if you watched/played: violence, blood/gore, nudity ... steam still does this, and as long as you don't use the politically charged TW expression, no-one seems to mind. For example, "Skyrim contains Blood and Gore, Intense Violence, Sexual Themes, Use of Alcohol, and Language."
"TW 1.0" as I remember it - the first time I heard the TW term - was a thing where professors told students in advance if a lecture contained material that could upset some students, I think it started when someone teaching a course on criminal law in a law degree told students in advance "[TW:] next week we will have the lecture on the law around rape and sexual assault". Properly practiced, that's not exposure therapy that's being polite to your students (though why not put your whole syllabus up at the start of term, if you can?) It was also not intended to let you skip that topic - it's pretty important to know about if you're training in criminal law! - just to let you know in advance when it's coming up.
If you're teaching a course on the history of the British Empire in India, you're at some point going to need to cover the Bengal famine, the Amritsar massacre, the mutiny (aka. first war of independence), the practically-a-civil-war during partition, and a lot of other things. Mind you, a "content note: British Empire" at the start of the course would probably cover all bases.
The choice of "trigger" that already means something in therapy was perhaps unfortunate, and nowadays I think "content warning" or even "content note" is preferred.
The real problem though was how students, who were neither trained therapists nor seemed to have consulted any, redefined and enforced their version of TW to the point that the term got tainted in the public view.
Basically, if you have anything like PTSD, you need an actual therapist not the collective hivemind of twitter (instagram these days?).
Generally agree with basically everything you wrote.
For me it's not even really political - I certainly am not aligned with the "heterodox" community that has been so actively against them. I think if people want to put trigger warnings on things, they should be able to make that choice, and people should be able to abide by them if they think they want to as well.
The issue is how it is framed as being important for helping people heal, like several people have spoken of it being important for in this thread. And I don't think the game/movie ratings ever really purported to be a part of that - indeed, it's always been more of an age appropriateness thing from my understanding.
If all of this was just "People should be able to make informed choices about the content they consume" and no one on any side was making claims about the mental health benefits for people with PTSD or similar, I think it would be a nonissue.
> Basically, if you have anything like PTSD, you need an actual therapist not the collective hivemind of twitter (instagram these days?).
100%. Far far far more likely to get through it and overcome the trauma with a good professional guiding you through the process. Social media is just going to have you doing silly things like writing gr@pe or gr*pe as if somehow using a euphemism that you already map back to the original word is helping and it wasn't originally just trying to get around content filters.
This is why I generally prefer CW/Content warning; it is basically saying "this is what this contains", instead of putting any implications of it being triggering. So CW: suicide, for example, is just for anyone who doesn't really want to read about suicide at the moment, whether it's because they want a more upbeat story or somebody they knew just died
"it does suggest that avoiding your triggers [...] provides no benefit"
This is the part I'm sceptical of. When I look this up, I mostly find articles like https://theconversation.com/proceed-with-caution-the-trouble... (and the underlying studies), which mainly address the question of whether reading a trigger warning and then consuming the potentially triggering content is better than just consuming the potentially triggering content without a warning.
(The article also mentions a finding that trigger warnings have "no meaningful effect on an individual's [...] avoidance of this content"; but I think that's entirely compatible with a world where most people consume the content regardless of the warning, some are more drawn to it because of the warning, and some (including the few who are truly vulnerable) avoid it because of the warning. The effect on those vulnerable few is what's most relevant here. The article does briefly mention "unhealthy avoidance behaviours", but in the context of one university's opinion and without supporting evidence.)
What's the best evidence against trigger warnings as a means of enabling traumatised people to make an informed decision on when (and whether) to confront their triggers?
> The article does briefly mention "unhealthy avoidance behaviours", but in the context of one university's opinion and without supporting evidence.)
There's not much additional context here because avoidant behavior is basically universally understood to be a bad thing when it comes to the long-term treatment of PTSD (this is separate from immediately/short-term after the event - different situation there). There's no real serious argument against this idea, so when avoidant behavior is discussed it doesn't require context on why that behavior is a bad thing, in the same way that an article targeted at cardiologists isn't going to explain why poor ejection fraction is an issue - it's baseline knowledge for the target audience.
To be clear, I'm not definitively stating it causes avoidant behavior - I am saying that it might, which would be one of those 'worst case' scenarios.
Trauma groups have been part of the meta-analysis that indicate no real change in avoidance, and some have had the 'forbidden fruit' impact even in trauma groups, but it's in similar quantities as the ones that show an increase in avoidant behavior.
Fundamentally, trigger warnings just don't make a lot of sense to try and argue in favor of from a 'helping people with their PTSD' standpoint if you believe the science.
1) For them to have the effect you claim is desirable, they would need to avoid the content - but avoidant behavior is a negative when it comes to overcoming PTSD
2) The science largely indicates that it doesn't cause them to change their behavior at all in this manner - so even by the measure of the desired effect, it doesn't seem to do anything.
3) There's some evidence that it might increase avoidant behavior (science would call this bad!) and some evidence it might increase people's exposure due to the 'forbidden fruit' effect (which would be bad from the supposed desired effect, and not necessarily good from the scientific standpoint - unnaturally being pushed towards something might also be negative vs. more 'natural' exposure, particularly when coupled with the upcoming point)
4) A variety of studies have shown that they increase anticipatory anxiety in people when they appear, which is of course a negative for anyone. I haven't been able to find any studies particularly engaging on this specific topic of anticipatory anxiety from trigger warnings + follow up exposure from the 'forbidden fruit' effect so this isn't something backed by science like the rest, but my gut instinct is that it would be more likely to be negative vs. something more organic. I could very well be wrong there.
I don't see any combination of piecing together these studies that could lead to a belief that trigger warnings provide value from a therapeutic standpoint.
Can you point me to some strong evidence that it's reliably counterproductive to avoid reading a book or watching a show that contains a trigger? I get that avoidance, in the sense of trying to push away all thoughts of the trauma and avoid all possible reminders, is generally considered counterproductive. And exposure, at the right times and in the right ways, can be very helpful (or absolutely necessary). But there's a big difference between those facts and the idea that it's bad for a PTSD sufferer to have the option of sometimes deciding not to actively expose themselves to triggering media.
> The avoidance cluster of PTSD symptoms involves efforts to avoid distressing memories, thoughts, or feelings, and external reminders like discussions about the traumatic event or encounters with people or places associated with it.
I don't see how specifically avoiding content that contains triggers is anything but avoidance behavior as discussed above - avoiding the news or discussions about war is pretty explicitly facilitated by TW - before the clip plays on the news, by people posting it at the top of their social media content, etc. And media with the content would fall in line pretty explicitly as an "external reminder"
Like, I don't think someone who has been physically tortured and dealing with PTSD should watch Hostel or other torture porn, and I don't think a vet with PTSD should watch a compilation video of some of the worst horrors of war. So I'm not arguing for massive exposure or intentional forced exposure, etc. But the fundamental issue is that going out of your way to prevent yourself from being exposed to it at all, which is what TW facilitate if they were to work, is pretty definitionally avoidant behavior.
> At best they don't seem to actually do anything, and at worst, they actively impede your ability to get better.
No, trigger warnings do not actively impede your ability to get better. That argument rests on a random trigger being framed as an "exposure therapy like" event. Exposure therapy is not done by random unprepared exposure to the triggering material with no follow-up. Nor by random exposure in a public setting.
Some also showed no evidence of this, but avoidant behavior is pretty much universally considered to be a specific maladaptive behavior when it comes to treating PTSD in the long run. It has nothing to do with the idea that it is the same as exposure therapy.
Teaching people to not let emotions get to them, and offending them to build up that immunity, used to be a normal part of life. I wonder what happened.
People gained more exposure to each other and realized it was kind to warn each other of things that might bother them a lot.
There’s quite a difference between the popularized image of what trigger warnings are and the common sense use-cases like “this media contains depictions of graphic sexual assault that some viewers may find disturbing”.
trigger warnings are not there to prevent people from being "offended" or to avoid emotions they may "get to them." trigger warnings are so folks who have experienced traumatic events can avoid having a panic response triggered unexpectedly.
traumatic events are not a normal part of life and fortunately most people are never forced to experience something truly traumatic. Uncontrolled exposure does not build up "immunity" or help individuals work through or process the trauma. if the warnings seem unnecessary to you, then they're probably not for you.
Trigger warnings have been quite heavily researched at this point, and at best they seem to have no positive impact on overcoming traumatic events, and some of the studies have shown them to be a negative.
Put 'scientific support for trigger warnings' in your favorite search engine and you'll find meta-analyses, RCTs, other types of studies, reviews, as well as discussions from the APS and other psychology- and psychiatry-related publications, etc.
This isn't to say removing trigger warnings is a replacement for actual guided therapy, exposure therapy or otherwise, but it doesn't seem like it would be a negative outcome for long-term mental health, and would be a benefit for anticipatory distress and potentially in combating avoidant behaviors (though not all studies universally found them to increase avoidant behaviors - just some).
This is a separate question from general polite society and social expectations and what is and isn't considered a courtesy. The studies also aren't dealing with people that have just gone through the traumatic experience, so you could make a reasonable argument that exposure to something still fresh could have a very different impact.
The purpose is not to help people overcome traumatic events. The purpose is to be kind to people. "Hey you are going to have a shitty day but it'll help you deal with your trauma" is not something that a professor should be unilaterally deciding.
But is there evidence that trigger warnings in classrooms make overcoming trauma more difficult? The cited research just says it doesn't help people overcome trauma.
All those papers look at the difference between "consuming content without being given a trigger warning" and "consuming content after being given a trigger warning."
There has been no proper research on the effectiveness of "being given a trigger warning, and then not consuming the content because of it." Which seems to be the most important factor to consider when it's about avoiding sudden panic responses.
> There has been no proper research on the effectiveness of "being given a trigger warning, and then not consuming the content because of it."
Well, there has been. From multiple angles. One, avoiding content because it might trigger you is just... avoidant behavior. Which is pretty much universally considered a bad thing. There's a big difference between seeking out exposure because you want to do your own exposure therapy (bad thing) and just letting yourself be exposed to things in a more organic fashion (good thing).
Two, most research indicates that TW do not actually reduce the consumption of content. Not all of the studies are on "did they help people process content they watched," as a lot of them are "did the TW make people not watch the content to begin with." Mostly it seems to have no impact. A smaller subset of studies showed effects in other directions - both reduction and increase of content viewing after TW. If they reduce viewing I'd argue this is bad because it's avoidant behavior, and I suspect that the 'forbidden fruit' effect is also not positive because it's now giving you pre-viewing anxiety and is no longer the more organic 'let exposure happen naturally, don't just stop watching the news because it might contain stories about war.'
I mean ... when exactly? Dueling was all about "you said a thing that makes me uncomfortable, so I have to at least pretend to want to kill you". Domestic violence used to be defended with variants of "he felt bad". When you look at history, people overreacted in all kinds of ways at all kind of small impulses. The only difference is that the impulses were slightly different.
I think they're arguing that it should not be left up to companies being nice. Yeah, Epic's layoffs were nice, but a lot of companies give shit or no severance at all.
I've been laid off and I only got paid until the end of the week, and for healthcare the only thing I had access to was overpriced COBRA.
Well I have a job now so it's not as big of a deal. This was a while ago.
I live in NYC, and when I was laid off from a job in 2023, I looked into the COBRA options, and they wanted something like $3500/month, which is a lot of money. I called around and was eventually able to do a program through NYC where we got insurance for free. It actually worked great; we were able to get insurance within a week. NYC ain't perfect but every now and then they come through.
If I get laid off or fired, I will likely check this option again.
Often when I see layoffs like this, I can't help but think "Wow that company has so many employees and yet, in practice, does so little". This is perhaps a rather uncharitable sentiment, but I can't help but have it.
Yes, Unreal Engine keeps getting improved, more Fortnite content gets produced. But there is a general lack of innovation, one that I find personally painful when I look at Epic's recent-ish track record. Needing to fire this many employees is not just a result of market conditions, but also a straightforward consequence of not being able to leverage them for sufficiently lucrative outcomes.
Companies with this amount of capital are well positioned to take multiple strategic bets which aren't at all safe bets, but pose no real financial risk for the company in aggregate. Why do these bets end up being taken instead by indies with much more to lose? Well, partly because indies often _need_ to take riskier bets to carve a niche. But the other side of the coin is what I can only surmise to be a lack of imagination and adventurousness on the part of management. They could be funding many experiments and seek to have another hit like Fortnite, perhaps in a somewhat different market. Having to seek another hit while your finances are declining is less pleasant.
When a company loses its edge in this way, as long as it hasn't _really_ captured a demographic or created some very sticky ecosystem, it's bound to get whittled down repeatedly. I doubt that Epic will suddenly get more creative and adventurous at this point, but perhaps necessity will have its part to play.
(Aside from all of that, I agree with most commenters here that the layoff is being handled about as gracefully as one could reasonably hope.)
> "Wow that company has so many employees and yet, in practice, does so little"
Some of it is real need for things like support, payments, and compliance in a bunch of languages and jurisdictions and across a bunch of platforms and combinations of platforms.
A lot of it's just that large businesses tend to be shockingly inefficient, often taking literally many hundreds of person-hours to do things that a small company or small team might do in low-tens. Coordination costs are high, processes are often really bad in ways that nobody who could fix them is empowered to, serious principal-agent problems are the norm rather than the exception, et c.
One of the weirdest things to me about the AI craze is that I don't see how it fixes organizational problems, and most big orgs are already burning more cash on waste due to those than they could possibly gain from fairly-optimistic LLM gains. Like, if they wanted to 5x development speed, they already can without a single LLM involved, by managing better. They could have done that ten years ago. All the more wild that they're flipping out over LLMs. You can't even come close to efficiently organizing the resources you already have...
> if they wanted to 5x development speed, they already can without a single LLM involved, by managing better.
True, but leaders of large organizations always want to fix inefficiencies and are presumably failing to. Kinda like saying "if humans stopped fighting wars, most of them would have better quality of life" -- people whose life quality is better at peacetime are already trying to avoid wars, and there's not much more they can do.
OTOH, AI is a practical step a CTO (or CEO or Board or whoever) can take to make the company more efficient (assuming the hype works out).
TLDR the bureaucracy is by design, in part to preserve what jobs are there, and in part to dilute accountability when it comes up. You can see how these two factors can lead to a self-reinforcing loop of inefficiencies, where CYA is more important than actual productivity.
Nanite is a good counterexample, very impressive and innovative technology. Even more impressive that they released the technical details instead of going the software patent route. I think trying to leverage the war chest to go after Steam's monopoly was exactly the type of adventurous plan you are talking about, the safe play would have been to make minimal investments and continue churning out games hoping for another big hit.
Lumen, Nanite, Substrate, Metahumans, and Chaos Physics all come to mind as major innovations driven by Epic. Nanite specifically has pretty much no alternatives. Additionally, the developer UX on UE5 gets better from release to release - for instance, being able to recompile your logic or animations while actively running the game is pretty insane.
I suspect the way you get to that size is having a team of 5 people doing X for Fortnite going "man, we could do our specific job a bit better if we had 7 people." Scale that to a whole corp.
Each job is justified in isolation to do a specific thing, at least at the hiring time. I suspect there aren't a lot of people thinking at a high level as you are "we have this many gajillion dollars - what are we betting on?"
Also as you get bigger you need more and more "glue" people, HR, accounting, etc. A five-man team can avoid all that, but you can't take them to 25 and have 5 5-people teams anymore. Some amount is just management.
> They could be funding many experiments and seek to have another hit like Fortnite
Remember that their success came from abandoning their original zombie game idea, and copying ideas from the new battle royale genre. With more polish, of course.
Why in the good god's name do so many people care if business is efficient? If businesses were maximally efficient we'd all be starving to death, or working at the red line, hanging on by a thread with no slack at a job to avoid starving to death, while like 5000 people live like gods.
1. More efficiency makes for higher-quality products, and opens up opportunities for more initiatives, which will also be higher quality than usual.
2. More efficiency means companies become super lean in order to minimize labor and other costs, focusing on a few select profit centers.
I guess you fall under #2. Both are correct, depending on the economy. So you're right, for now. #1 was the trend last decade despite horrible inefficiencies.
I don't think so. My experience at inefficient companies is not "Wow I have a lot of free time." It's "Wow my day is completely taken up by useless bullshit that prevented me from getting anything of value done".
I think there's a good deal of wiggle room between being worked to death and teaching a manager that makes more than you how Jira works for the 10th time.
By accepting that valuing the life of a cop over a criminal is a good reason for American cops to behave the way they do, what you end up encouraging instead is cops valuing their own lives over those of citizens to a shocking extent.
One issue is that security theater creates demand for itself. Do things that induce worry and a tendency towards paranoia in the more susceptible parts of the population and then you will gradually raise the general alertness of the population. This then manufactures a desire for these measures. It largely rides off of people's general unwillingness to entertain just how many of the measures are ineffective or nonsensical. "It can't all be pointless. Surely some of it must make us safer." It's not an unreasonable belief in itself, but everyone having that attitude lets security theater grow cancerously.
Agreed, and not just engineering quality. The company I've been with for 8 years is on its last legs and massive overhiring has had a huge hand in it. It cost a tremendous amount of money and made management more difficult, reducing the quality of the decisionmaking in many areas. It made its strategy deviate further from doing the things that would have made the company sustainable. It also devastated employee morale with the subsequent repeated waves of layoffs.