I’ve recently had to deal with my father’s cognitive decline & his falling for scams left & right on Meta’s apps. This has been so hard on our family. I did a search on Marketplace the other day and 100% of the sellers were scams, all 20-30 of them.
Meta is a cancer on our society; I’m shutting down all my accounts. Back when TV/radio/newspapers were how you consumed news, you couldn’t get scams this bad at this scale. Our parents had a much easier time dealing with their parents as they cognitively declined. We need legal protections for elders and youth online more than ever. Companies need to be liable for their ads and scam accounts. Then you’d see a better internet.
My grandmother has been through the same thing. She was scammed out of all of her savings by accounts impersonating a particular celebrity. Thankfully the bank returned all of the money, but the perpetrators will never be caught; they operate out of Nigeria (one of them attached their phone to her Google account).
Unfortunately these fake celebrity accounts are swarming her like locusts again. We tried to educate her about not using her real name online, not giving out information or adding unknown people as friends, but there's a very sad possibility that she doesn't fully understand what she's doing.
It was emotionally difficult going through her laptop to gather evidence for the bank. They know exactly how to romance people and pull on heartstrings, particularly with the elderly.
Meta's platforms are a hive of scammers and they should be held accountable.
The number of people in my outer circle of friends who fall for the “copied profile” trick of adding unknown people, or who accept a friend request from an attractive young woman who is somehow interested in them, is shocking. (I’m gauging this from looking at the “mutual friends” in the friend request.)
I don’t think it’s a silent crisis per se, but just one people ignore.
There’s tons of media about it, and tons of people are aware of elder fraud, etc., but people don’t want to think about the vulnerable in society. There have been jokes about it and media about it going back decades.
People are aware but solving it requires an uncomfortable level of change in society, training and regulations.
As an aside, both Thelma and The Beekeeper are recent movies about elders being scammed and revenge being taken. Both very different but enjoyable.
People survived with quite severe dementia hundreds of years ago. It doesn’t necessarily imply the rest of the body is unhealthy, just their brain, in a very specific way.
My dad had fallen for two scams - one through WhatsApp, the other through texts.
I’m not sure how much we can blame individual companies for this. Obviously they should be doing more - shutting down accounts that message people at random, for instance, but I feel like the scammers will find a way.
I also don’t know what else we can do. It should at least be easier for kids (or anyone else) to shut down their parents’ accounts once this happens, stop all wire and crypto transfers, etc.
Unfortunately I have a similar experience. If someone's working at Meta right now, or has worked there in the past 10 years, they're willingly and actively contributing to making society worse. Some open-source tech is not going to undo any of this, nor any of the past transgressions. I get that the pay is probably great, but have some decency.
I suggested a hiring ban on anyone who ever worked at Meta some years back. It was not met with open arms. Going to try again here...
I think it's a valid suggestion that might result in people rethinking working for Meta if it was taken seriously.
Working for Meta is ethically questionable. The company does unspeakable damage to our country. It harms our kids, our elders, our political stability. Working for it, and a number of similar companies, is contributing to the breakdown of the fabric of our society.
Why not build a list of Meta employees and tell them they're not eligible to be hired unless they show some kind of remorse or make restitution?
It could be an aggregation of LinkedIn profiles and would call attention to the quandary of hiring someone with questionable ethics to work at your organization. It might go viral on the audacity of the idea alone. That might cause some panic and some pause amongst prospective Meta hires and interns. They might rethink their career choices.
My litmus test is, do you think that the person managing Meta’s coffee supply is ethically questionable? If you met them, would you tell them that they need to quit, and would you consider them a bad person if they don’t? There are organizations that meet that bar, but I really don’t think Meta is one of them.
One must also check what YouTube recommends to their elderly parents, because it is easy for them to slide into being recommended harmful content, mostly psychological, religious, or alternative-medicine topics. Note that not all of it is harmful, but most of it is published by very odd channels.
Opening YouTube on a new machine / OS / browser / without login is eye opening in terms of the awful stuff that gets recommended by default and how quickly it tilts worse if you watch any of it.
This, so much! It's outright disgusting. I have no idea why we tolerate this as a society. I fear it is because this diagnosis isn't widely known, it's happening on the fringes.
Everybody, including journalists and tech people, is moving about in their own algorithmic bubble nearly all the time. They just can't imagine how bad the situation has become out there. We're turning a blind eye to the very thing that is destroying our societies.
I think that any of these algorithmic feeds, by any company, should be held as if the companies have vetted the content and it is theirs. And the culpability that goes with that.
YouTube also has Kitboga, Pirogi, Deeveeaar, etc., which are very helpful. I introduced my mother, who has early dementia and can't do much, so she watches a lot of Netflix and YouTube, to Kitboga, and she loved it and found other scambaiters. I'm stoked. I know she will tell a scammer to f off now.
So many of us have been there - it is brutal. These platforms are ripping us apart from each other, providing criminals easy access to the most vulnerable, and concentrating wealth to an unimaginable degree.
One third of all scams in the US are operated on Meta platforms.
They have a policy that if a scammer’s ad spend makes up more than 0.15% of Meta’s revenue, moderators must protect the scammer instead of blocking them.
Meta is working hard to scam your dad for ad spend. It’s hugely profitable for them and they are helping to grow it per internal policy. They are only interested in fostering big-time scammers.
I would like to understand the downvotes: is it from doubting these facts? If so, I will post the sources (which were recent mainstream news on the front page of HN). Or is it because of the negative sentiment about Meta? Or disagreement that Meta has any responsibility over moderating scams they promote?
These are verified facts that make up the substance of my message:
- Meta protects their biggest scammers, per internal policy from leadership
- Meta makes a huge profit from these scammers (10% of total revenue; or in other words, their scam revenue is approximately 5x larger than the total Oculus revenue)
- The scams that Meta promotes represent one-third of the total online scams in the US
> One third of all scams in the US are operated on Meta platforms.
And 100% of all internet scam traffic in the US goes through either US ISPs or US cell carriers.
Should those entities be held liable instead? Or maybe, Meta instead should scan users' private messages on their platforms and report everything that might seem problematic (whatever the current US administration in power considers as problematic) to the relevant authorities?
My personal take: there should be more effort in going after the actual scammers, as opposed to going after the "data pipes" (of various abstraction levels) like Meta/ISPs/cell carriers/etc.
Our own attempts to do something about (successful) scammers were met with utter indifference by my parents' state's (Arizona) attorney general, county sheriffs, and local police.
At this point, I think all of the big tech companies have had some accusations of them acting unethically, but usually, the accusations are around them acting anticompetitively or issues around privacy.
Meta (and social media more broadly) are the only case where we have (in my opinion) substantiated allegations of a company being aware of a large, negative impact on society (mental wellness, of teens no less), and still prioritizing growth and profit. The mix is usually: grow at all costs mindset, being "data-driven", optimizing for engagement/addiction, and monetizing via ads. The center of gravity of this has all been Meta (and social media), but that thinking has permeated lots of other tech as well.
It's a well worn playbook by now. But Meta seems to be the only one where we now have proof of internal research being scuttled for showing the inconvenient truth.
True, but there haven't even been any publicly known internal research attempts at, for example, YouTube/Google about the content they are pushing and, probably more importantly, the ads they keep pushing into people's faces. I bet FB/Meta are kicking themselves now, for even running such internal research in the first place.
My point is that all of these big tech giants would find that they are a harmful cancer to society, at least in part. Which is probably why they don't even "research" it. That way they can continue to act oblivious to this fact.
> I bet FB/Meta are kicking themselves now, for even running such internal research in the first place.
100%. This is what people miss in this thread when they're talking about seeking to punish companies who knowingly harm society. All that is going to do is discourage companies from ever seeking to evaluate the effects that they're having.
The tobacco industry also did that, but in many ways this seems different: tobacco had existed for millennia and was a scourge introduced to the wider world by the tribes of the “new world”, whereas Facebook was a primary player in creating the whole social media space, something that effectively did not exist before in the predatory and malignant form it took, a digital panopticon, or, more accurately and far worse, a system where your participation is required for a certain kind of success.
Social media is abusive and utterly psychotic and narcissistic, because that is the type of people who created it, using basic psychological abuse and submission tactics. Banks, casinos, games, Hollywood/TV, news/politics, social media, contemporary academia and religion, etc.: they all function on being endorphin dealers/dispensers.
What do you think the social effects of large-scale advertising are? The whole point is to create false demand, essentially driving discontent. I've no idea if Google et al. have ever done a formal internal study on the consequences, but it's not hard to predict what the result would be.
The internet can provide an immense amount of good for society, but if we net out its overall impact, I suspect the internet has had a severely negative effect on society. And this effect is being magnified by companies who certainly know that what they're doing is socially detrimental, but they're making tons of money doing it.
I agree false demand effects exist. But sometimes ads tell you about products which genuinely improve your life. Or just tell you "this company is willing to spend a lot on ads, they're not just a fly-by-night operation".
One hypothesis for why Africa is underdeveloped is they have too many inefficient mom-and-pop businesses selling uneven-quality products, and not enough major brands working to build strong reputations and exploit economies of scale.
> But sometimes ads tell you about products which genuinely improve your life.
I’d argue that life improvement is so small it’s not worth the damage of false demand. I can maybe think of one product that I saw a random ad for that I actually still use today. I’d say >90% of products being advertised these days are pointless garbage or actually net negative.
Advertising is cancer for the mind and our society severely underestimates the harm it’s done.
The positive benefits in education, scientific research and logistics are hard to overstate. Mass advertising existed before the internet. Can you be more explicit about which downsides you think the additional mass advertising on the internet caused that come anywhere close to the immeasurable benefits provided by the internet?
I'm somewhat unsurprised that my off-the-cuff hypothesis has been tested, and is indeed likely accurate. [1] Advertising literally makes people dissatisfied with their lives. And it's extremely easy to see the causal relationship for why this is. Companies like Google are certainly 100% aware of this. And saying that advertising existed before the internet is somewhat flippant: obviously it did, but the scale has increased so dramatically that it's reaching the point of absurdity.
And a practical point on this topic is that the benefits of the internet are, in practice, fringe, even if freely available to everyone. For instance, there are now free classes from most of the top universities online, on just about every topic, that people can enroll in and participate in. There are literally zero barriers to receiving a free, premium-quality education. Yet the number of people who participate in this is negligible, and overwhelmingly composed of people who would have had no less success even prior to the internet.
By contrast the negatives are extremely widespread on both an individual and social level. As my post count should demonstrate, I love the internet. And obviously this site is just one small segment of all the things I do on the internet. In fact my current living would be impossible without it. Yet if I had the choice of pushing a button that would send humanity on a trajectory where we sidestep (or move along from) the internet, I wouldn't hesitate in the slightest to push it.
PG/VG base is exactly the same stuff that has been used in foggers/hazers for decades. If there were negative health effects associated with the stuff, we'd have spotted it long ago. As for nicotine, well, it's the same stuff as in cigarettes, we know about its effects again thanks to decades of research.
The only thing left is questionable flavoring agents and dodgy shops with THC oil vapes (although that kind of contamination is now known and it's been ages since I last heard anything).
>PG/VG base is exactly the same stuff that has been used in foggers/hazers for decades. If there were negative health effects associated with the stuff, we'd have spotted it long ago.
How many people are directly exposed to it daily? Technicians and performers are probably it. Everyone else's exposure is very rare, so it's possible that any side effects took a while for the medical community to pick up on, until everyone started vaping.
>At large, vapes are better than cigarettes.
Better yes, they are harm reduction over cigarettes. However, it's not "good" and should be as regulated as cigarettes are.
> There is one study looking at the potential to use PG as a carrier for an inhaled medicine (https://www.ncbi.nlm.nih.gov/pubmed/18158714) and another which mentions that PG or ethanol may be used as a cosolvent (https://www.ncbi.nlm.nih.gov/pubmed/12425745) in nebulizers, but no evidence presented of an asthma inhaler or nebulizer that is actually used today containing PG.
Even then, there's a huge difference between being on stage with a fog machine, or taking 3-4 puffs a day of a smaller amount from a nebulizer, and taking hundreds of puffs a day, chronically, with vapes.
> Meta are the only case where we have substantiated allegations of a company being aware of a large, negative impact on society
Robinhood has entered the chat
Why would one specific industry be better? The toxic people will migrate to that industry and profit at the expense of society. It’s market efficiency at work.
I do think an industry is often shaped by the early leaders or group of people around them. Those people shape the dominant company in that space, and then go off to spread that culture in other companies that they start or join. And, competitors are often looking to the dominant company and trying to emulate that company.
> Early OpenAI set the tone of safe, open-source AI.
Early OpenAI told a bunch of lies that even (some of) their most-ardent fans are now seeing through. They didn't start off good and become the villain.
Gamifying day trading is just turning the retail market into gambling. The obvious objection is that this has been possible for a long time now. But I never knew young men to casually play the market day to day, the way Wall Street Bets does now, like they would follow sports in the past.
It's exploiting unsophisticated investors. Trading on margin used to be for extremely experienced and educated people working for a large financial institution. The risk of margin trading is extreme, with potentially unlimited losses.
Also, tobacco companies and oil companies famously got into trouble from revelations that they were perfectly aware of their negative impacts. For the gambling and alcohol industry, it probably wouldn't even make the news if some internal report leaked that they were "aware" of their negative impact (as if anyone thought they would not be?)
Social media is way down on the list of industries aware of their negative impact. The negative impact arguably isn't even central to their business model, which it certainly is for the other industries mentioned.
The leaders and one of the announcers of Radio Télévision Libre des Mille Collines got 30 years to life sentences for their part in the Rwandan genocide.
> Meta (and social media more broadly) are the only case where we have (in my opinion) substantiated allegations of a company being aware of a large, negative impact on society (mental wellness, of teens no less), and still prioritizing growth and profit
Them doing nothing about hate speech that fanned the flames for a full blown genocide is pretty terrible too. They knew the risks, were warned, yet still didn't do anything. It would be unfair to say the Rohingya genocide is the fault of Meta, but they definitely contributed way too much.
We all know this. As people in the tech industry. As people on this website. We know this. The question is, what are we going to do about it? We spend enough time complaining or saying "I'm going to quit Facebook," but there's Instagram and Threads and whatever else. And you alone quitting isn't enough. We have to help the people who are really suffering. We can sometimes equate social media to cigarettes or alcohol and point to the addictive parts, but we have to acknowledge that tools for communication and community are useful, if not vital, in this day and age. We have to find a way to separate the good from the bad and actively create alternatives. It does not mean you create a better cigarette or ban alcohol for minors. It means you use things for their intended purpose.
We can strip systems like X, Instagram, Facebook, YouTube, TikTok, etc. of their addictive parts and get back to utility and value. We can have systems, not owned by US corporations, that are fundamentally valuable to society. But it requires us, the tech-savvy engineering folk, to make those leaps. Because the rest of society can't do it. We are in the position of power. We have the ability.
At the moment the biggest hope I have is that there's client-side tech that protects us from these dark patterns. But I suspect it will have its own dark patterns to make it profitable.
I guess we can speculate or theorise on potential strategies, but beyond hope we should also try to do something. I have seen some X clones with variations, but a lot of the same behaviour plays out when you have no rules around posting, moderation, types of content, etc. Effectively these platforms end up in the same place of gamification and driving engagement through addictive behaviours, because they want users. Essentially I think true community is different; true community keeps each other accountable and in check. Somehow we need to get back to some of that. Maybe cooperative-led tools. Non-profits. I think Mastodon meant well and didn't end up in the right place. Element/Matrix is OK but again doesn't feel quite right. Maybe we should never try to replicate what was, I don't know. BitChat (https://bitchat.free/) is an interesting alternative from Jack Dorsey, who I think is trying to fix the loss of Twitter and the stronghold of WhatsApp.
> Companies can't really be expected to police themselves.
Not as long as we don't punish them for failing to. We need a corporate death penalty for an organization that, say, knowingly conspires to destroy the planet's habitability. Then the bean counters might calculate the risk of doing so as unacceptable. We're so ready and willing to punish individuals for harm they do to other individuals, but if you get together in a group then suddenly you can plot the downfall of civilization and get a light fine and carry on.
Corporate death penalty as in terminate the corporation?
Why not the actual death penalty? Or put another way, why not sanctions on the individuals these entities are made up of? It strikes me that qualified immunity for police/government officials and the protections of hiding behind incorporation serve the same purpose - little to no individual accountability when these entities do wrong. Piercing the corporate veil and pursuing a loss of qualified immunity are both difficult - in some cases, often impossible - to accomplish in court, thus incentivizing bad behavior for individuals with those protections.
Maybe a reform of those ideas or protocols would be useful and address the tension you highlight between how we treat "individuals" vs individuals acting in the name of particular entities.
As an aside, both protections have interesting nuances and commonalities. I believe they also highlight another tension (on the flip-side of punishment) between the ability of regular people to hold individuals at these entities accountable in civil suits vs the government maintaining a monopoly on going after individuals. This monopoly can easily lead to corruption (obvious in the qualified immunity case, less obvious but still blatant in the corporate case, where these entities and their officers give politicians and prosecutors millions and millions of dollars).
As George Carlin said, it's a big club. And you ain't in it.
In my conception, part of the corporate death penalty would be personal asset forfeitures and prison time for individuals who knew or should have known about the malfeasance.
This is what China does. The problem is that the application is a little, uh, selective. As soon as you get any kind of corruption it becomes a power play between different factions in the elites.
You can't do any of this without a strong, independent, judiciary, strongly resistant to corruption. Making that happen is harder than it sounds.
And it still won't help, because the perps are sociopaths and they can't process consequences. So it's not a deterrent.
The only effective way to deal with this is to bar certain personality types from positions of power.
You might think that sounds outrageous, but we effectively have that today, only in reverse. People with strong moral codes are actively excluded from senior management.
It's a covert farming process that excludes those who would use corporate power constructively rather than abusing it for short-term gain.
In these cases, what is prison time going to accomplish that a severe enough monetary remedy would not? Putting someone in a prison cell is a state power (criminal remedy). I think that is a useful distinction generally, and a power that should be employed only when legitimized through some government process which has a very high bar (beyond a reasonable doubt, criminal rules of evidence, protections against self incrimination etc), as it deprives someone of their physical liberty.
It strikes me that if you also appreciate this distinction, then your remedy to corporations that have too much power is to give the government even more power?
Personally, I would like to see more creative solutions that weaken both government and corporations and empower individuals to hold either accountable. I think the current gap between individuals and the other two is too severe, I'm not sure how making the government even more powerful actually helps the individual. Do you want the current American government to be more powerful? Would your answer have been different last year?
I do not see any equivalence between corporate power and government power. The population as a whole controls government power. Corporate power is constrained only by government power. I think one of the most pernicious notions in our society is that the idea that "the government" is something separate from ordinary people.
Of course, our current government has a lot of problems, but that doesn't mean I don't want the government to have power. I just want it to have power to do what the population actually wants it to do (or, perhaps, what the population will actually be happiest with).
What would be your proposed mechanism for empowering individuals? How would such a mechanism not ultimately rely on the individual leveraging some larger external power structure (like a government)? I think if we want to empower all individuals roughly equally (i.e., not in proportion to their wealth or the like), then what we wind up with is something I'd call a government. Definitely not the one we have, but government nonetheless.
It's a fair rejoinder, except I think it mistakes idealism about government for realism. In reality, the government becomes an entity unto itself. This is a universal problem of government. Democratic institutions are themselves supposed to be a check on this impulse. However, as you are aware, these are not absolute. A check that foresees a need to restrain government also sees a need to empower the government to restrain people.
I think however when we acknowledge that men are not angels, and that therefore government itself is dangerous merely as a centralization of power, then no, you cannot simply say well government is supposed to be of a different type of power than corporations. Because again, in reality this is often not the case. This is why several of the American founders and many of those who fought in that revolution also became anti federalists or argued against constitutional ratification.
I don't know what the answer is, but I don't think there has ever been a situation where it is accurate to say the population as a whole controls the government. In practice it doesn't work that way, and it is about as useful as saying the market controls corporations. I think something more like anti-federalism could use a renaissance: the government should be weak in more cases. Individuals should be empowered. A government power to hold a corporation accountable could then rest simply on its strict duty to enforce a civil remedy. That is of a different nature than the government deciding on its own whom (and more importantly, whom not) to prosecute.
But I appreciate your push back, there are indeed no easy answers.
Bullshit. I have no control whatsoever over the government. It is completely separate from me. I have 1000x more power over Amazon by my ability to choose to not buy from them than my vote gives me over government bureaucracy. That's why whenever I have a problem with an Amazon order it is resolved in minutes when I contact support. Good luck if you have a problem with the government.
Amazon are not resolving your issue in minutes because you have power over them. They do it because it is efficient and profitable for them to keep customers happy. Your actual influence over a trillion-dollar company is tiny compared to your influence as a voter.
One customer taking their business elsewhere does not affect Amazon in any meaningful way. One vote is counted directly. The gap is between how it feels and how the power actually works. This of course assumes you live in a democratic country.
My view is that the corporate death penalty is either dissolution or nationalization, whichever is less disruptive. If you make your company "too big to fail" without hurting loads of people, then use it to hurt people, then the people get your company. If it's a smaller operation it can just go poof. The priority should be ensuring the bad behavior is stopped, then that harm is rectified, and finally that an example be made to anyone else with a clever new way to externalize harm as a business model.
Sounds like a very extreme remedy. Not sure you want whatever government is elected every four years to have this power. Doesn't address the concern re regulatory capture, could lead to worse government incentives. Why not focus on allowing regular people to more realistically hold corporations and their owners/officers liable in civil courts? It's already hard enough given the imbalance of funds, access and power... but often legal doctrine makes the bar to clear impossible at the outset.
I would posit that we are in the current political situation precisely because we do not hold the capital class accountable. Do you sincerely believe that investors losing their investment is a “very extreme” response to gross corporate lawbreaking on their behalf?
We are in this situation because we elect people who do not hold the capital class accountable. Look at the people we elect. How would them running companies be any better?
The capital class chooses and presents the people you can vote for. They decide what issues are talked about in the media, they decide who gets the most funding, and they probably have ways of getting rid of or corrupting the people who somehow get popular without first being accepted by at least some people from the capital class.
We are in the situation because the capital class have turned the people we elect into servile puppets. Because they have simply been allowed to become too big and powerful.
They aren't servile puppets because they are children, they are servile puppets because that's what they are paid (and threatened, via financing their more pliable opponents) to do.
Why not make the civil case path easier then? The extreme nature of your remedy is the idea of a government taking over and owning a corporation. That creates bad incentives. I think if individuals could reasonably expect to be able to knock people like Mark Zuckerberg out of the billionaire class in a civil suit, then yes, he and the types of people he represents would behave better. Having the government run Facebook or Enron or Google or whatever both sounds less desirable than empowering individuals and weakening corporate protections in civil cases, and frankly; worse than the prevailing situation re the "capital class". If you think the current political situation is bad the last thing you should want is more government power.
Except drug dealers do not sell you fentanyl just so you can get high; they do not care. They do not care about YOUR OWN intention. People demand, they supply. And these people can have legitimate reasons.
What would they fear about it? Nationalisation would include compensation (as per relevant laws), so the shareholders don't lose a lot. Maybe the compensation would be less than the potential highs of the stock price, but it's not like they entirely lose out
The actual death penalty is not a good idea for several reasons, including possibility of error (even if that possibility is small).
(In the case of a corporation, also many people might be involved, some of whom might not know what it is, therefore increasing the possibility of error.)
However, terminating the corporation might help (combined with fines if they had earned any profit from it so far), if there is not an effective and practical lesser punishment which would prevent this harm.
However, your other ideas seem to be valid points; one thing that you mention is, government monopoly can (and does) lead to corruption (although not only this specific kind).
Problem remains: What do we do, if others don't care and violently start killing our group? Do we reward them, throwing away all our weapons and making them our new government?
This question of course currently has a very real real world parallel.
Just a few days ago, someone replied to one of my comments saying that considering the lives of people who aren't born yet is a completely immoral thing to do, meaning making anyone alive today sacrifice something to protect the planet in 100 years is immoral. So I guess people can find all sorts of justifications.
Of course that is wrong and it is not immoral; but, if you want to do it in the moral way, you have to consider the lives of all living things (plants and animals), including but not limited to humans. Furthermore, there is the consideration of what exactly has to be sacrificed and what kind of coercion is being used (which might be immoral for a different reason); morals are not as simple as they would say.
But, yes people do find all sorts of justifications, whether or not they are any good (although sometimes it is not immediately clear if it is any good, unfortunately).
Prime example: animal agriculture. By far the biggest driver of biodiversity loss and nature destruction. Yet people justify it constantly with trivial things like taste, convenience, tradition, etc.
> We're so ready and willing to punish individuals for harm they do to other individuals, but if you get together in a group then suddenly you can plot the downfall of civilization and get a light fine and carry on.
Surely "plot the downfall of civilization" is an exaggeration. Knowing that certain actions have harmful consequences to the environment or the humanity, and nevertheless persisting in them, is what many individuals lawfully do without getting together.
The group of pretty much all humans is such a group because we all conspire to burn fossil fuels. Do you really think a global civilization death penalty is a good idea? That's throwing out the baby with the bathwater.
The problem is that our current ideology basically assumes they will be - either by consumer pressure, or by competition. The fact that they don't police themselves is then held as proof that what they did is either wanted by consumers or is competitive.
Maybe there are more parallels to tobacco companies. An incredible amount of taxes, warnings, and rules forbidding kids from using it were the solutions to the first problem, and likely to this second one too.
1. "The Tobacco Institute was founded in 1958 as a trade association by cigarette manufacturers, who funded it proportionally to each company's sales. It was initially to supplement the work of the Tobacco Industry Research Committee (TIRC), which later became the Council for Tobacco Research. The TIRC work had been limited to attacking scientific studies that put tobacco in a bad light, and the Tobacco Institute had a broader mission to put out good news about tobacco, especially economic news." [0]
2. "[Lewis Powell] worked for Hunton & Williams, a large law firm in Richmond, Virginia, focusing on corporate law and representing clients such as the Tobacco Institute. His 1971 Powell Memorandum became the blueprint for the rise of the American conservative movement and the formation of a network of influential right-wing think tanks and lobbying organizations, such as The Heritage Foundation and the American Legislative Exchange Council."
France, Germany, the UK, Switzerland, the Netherlands and Belgium are a few I'm familiar with. There are of course areas for improvement, but in all of those you have a strong press that can annihilate politicians for crimes, as well as more or less working institutions that punish corruption.
Take a look at France, where a former president went to prison. Okay, it got commuted to house arrest (same sentence as a former PM candidate for president), but that's still a pretty serious punishment, especially for a such a high level politician.
> Take a look at France, where a former president went to prison. Okay, it got commuted to house arrest
There is no house arrest, he appealed and is innocent until proven guilty. People stay in prison after appealing in case there is a serious risk of them fleeing the country or in case they present a danger to society, both of these have been deemed low enough
The US-ians voted twice for Trump so far. I have difficulty seeing the good it did for the world, let alone the USA and the US-ians.
Specifically for corporations, giving everyone in the world the power to vote for dismantling Meta (a world mega-corp) might be interesting to see, though.
He is doing good for his supporters, at least as far as they think. He has delivered all sorts of stupid, cruel and self-destructive stuff that they want.
The problem is that their wants have been steered in that direction by decades of cynical media manipulation, but that's just the nature of democracy.
> Meta required users to be caught 17 times attempting to traffic people for sex before it would remove them from its platform, which a document described as “a very, very, very high strike threshold."
I don’t get it. Is sex-trafficking-driven user growth really so significant for Meta that they would have such a policy?
The "catching" is probably some kind of automated detection scanner with an algo they don't fully trust to be accurate, so they have some number of "strikes" that will lead to a takedown.
There is always a complexity to this (and don't think I'm defending Meta, who are absolutely toxic).
Like Apple's "scanning for CSAM", and people said "Oh, there's a threshold so it won't false report, you have to have 25+ images (or whatever) before it will"... Like okay, avoid false reporting, but that policy is one messy story away from "Apple says it doesn't care about the first 24 CSAM images on your phone".
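For what it's worth, here is a minimal sketch (Python, purely illustrative; my own assumption about how such a gate might be wired up, not Meta's or Apple's actual logic) of a strike-threshold takedown rule sitting on top of a noisy detector. The 17 comes from the filing; everything else is invented:

    from collections import defaultdict

    STRIKE_THRESHOLD = 17   # figure cited in the filing, treated here as a plain constant
    MIN_CONFIDENCE = 0.5    # invented cutoff for a detector the operator doesn't fully trust

    strikes = defaultdict(int)

    def record_detection(account_id, detector_confidence):
        """Count a strike only when the noisy detector is confident enough;
        return True once the account crosses the takedown threshold."""
        if detector_confidence < MIN_CONFIDENCE:
            return False              # low-confidence hits are ignored entirely
        strikes[account_id] += 1
        return strikes[account_id] >= STRIKE_THRESHOLD

The point is just that the "17 strikes" figure only makes sense as a knob on top of a detector nobody trusts, and the messy story is what that knob looks like from the outside.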
We don’t know. But as you read from the article, Meta’s own employees were concerned about it (and many other things). For Zuck it was not a priority, as he said himself.
We can speculate. I think they just did not give a fuck. Usually limiting grooming and abuse of minors requires limiting those minors’ access to various activities on the platform, which means those kids go somewhere else. Meta specifically wanted to promote its use among children below 13 to stimulate growth; the fact that this often resulted in the platform becoming dangerous for minors was not seen as their problem.
If your company is driven by growth über alles à la venture capitalism, growth goes before everything else. Including child safety.
Reading Careless People by Sarah Wynn Williams is eye opening here, and it's pretty close to exactly that.
> I think they just did not give a fuck.
It's that people like Zuck and Sandberg were just so happily ensconced in their happy little worlds of private jets and Davos and so on that they really could not care less if it wasn't something that affected them (and really, the vast majority of issues facing Meta don't affect them, only their bonuses and compensation).
Your actions will lead to active harm? "But not to me, so, so what, if it helps our numbers".
Of course it's not. We could speculate about how to square this with reason and Meta's denial; perhaps some flag associated with sex trafficking had to be hit 17 times, and some people thought the flag was associated with too many other things to lower the threshold. But the bottom line is that hostile characterizations of undisclosed documents aren't presumptively true.
I predict that in much sooner than 100 years social media will be normalized and it will be common knowledge that moderating consumption is just as important as it is with video games, TV, alcohol, and every other chapter of societies going through growing pains of newly introduced forms of entertainment. If you look at some of the old moral panic content about violent video games or TV watching they feel a lot like the lamentations about social media today. Yet generations grew up handling them and society didn’t collapse. Each time there are calls that this time is different than the last.
In some spaces the moral panic has moved beyond social media and now it’s about short form video. Ironically you can find this panic spreading on social media.
We moderate consumption of alcohol, sugar, gambling, and tobacco with taxes and laws. We have regulations on what you can show on TV or films. It is complete misuse of the term to claim a law prohibiting sale of alcohol for minors is ‘moral panic’. It is not some individual decision and we need those regulations to have a functioning society.
Likewise, in a few generations we will hopefully find a way to transfer the medical costs of the mental-health harm caused by these companies onto those companies through taxes, like we did with tobacco. At that point, using these apps will hopefully be seen as being as lame as smoking is today.
Only over-the-air TV is regulated by the FCC. Films and non-broadcast TV are only regulated if they contain obscene content. If anything there was more regulation of film production in the past. Hays Code etc.
I don't think any of those items have had the significance and divisiveness of social media, or have been controlled by billionaires who have corrupted the election systems.
Social media seems far more dangerous and harder to control because of the power it grants its "friends". It'll be much harder to moderate than anything else you mentioned.
In 100 years' time they will be so fried by AI they won't be capable of being shocked. Everyone will just be swiping on generated content in those hover chairs from WALL-E.
In Mad Men, we have these little mind-blown moments at the constant sexism, racism, smoking, alcoholism, even the attitudes towards littering. In 2040 someone's going to make a show about the 2010s-2020s and they'll have the same attitude towards social media addiction.
"Priorities" quote:
Mark Zuckerberg said that he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.”
You need to be careful with those arguments because you can fall into the trap of "think of the children" for everything. As an example, I doubt any of us would fault a novelist for focusing more on writing books than on saving children...
> You need to be careful with those arguments because you can fall into the trap of "think of the children" for everything
In a text message in 2021, Mark Zuckerberg said that he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.” Zuckerberg also shot down or ignored requests by Clegg to better fund child safety work.
Fair point, but the fuller context is absurd—the OP's rendering is correct in tone and emphasis.
"It is difficult to get a man to understand something, when his salary depends upon his not understanding it." - Upton Sinclair
HN has seen this quote many times; tech workers willfully or naively ignore the harm their contributions cause as long as the life changing paychecks keep coming, letting them pretend that they are too far removed from the damage to be responsible.
Then comes the classic post “I’m leaving FAANG, so brave of me <quiet-part>funded entirely by the same extraction and harm I once insisted I didn’t see.</quiet-part>"
I quit Facebook in the early to mid 2010s, well before social media became the ridiculously dystopian world it is today.
Completely coincidentally, I had quit smoking a few weeks before.
The feelings of loss, difficulty in sleeping, feeling that something was missing, and strong desire to get back to smoking/FB was almost exactly the same.
And once I got over the hump, the feelings of calm, relaxation, clarity of thought, etc were also similar.
It was then that I learnt, well before anyone really started talking about social media being harmful, that social media (or at least FB…I didn’t really get into any other social media until much later), was literally addictive and probably harmful.
I never really liked FB or any other big application that much, so kicking them after 2016 was not that bad, but I used to be a heavy user of forums, and kicking some of them felt pretty similar to kicking tobacco back in the day.
We are super social, insane monkey creatures that get high on social interaction, which in many ways is a good thing, but it can turn into toxic relationships, whether towards family members or even towards a social media application. It is not very dissimilar to how slot machines or casinos lure you into addiction. They use exactly the same means; therefore they should be regulated like gambling.
Which is why I found it so comparable to quitting smoking.
A smoker doesn’t feel “better” after quitting smoking. Even over a decade after having quit I bet if I smoked a cigarette right now I would feel much nicer than I did right before I smoked it. However, I would notice physiological changes, like a faster heart rate, slight increase in jumpiness, getting upset sooner, etc.
Quitting FB was similar. I didn’t feel “better”, but several psycho-physiological aspects of my body just went down a notch.
So does this apply to all social media? (Threads, X, Bluesky, IG, etc.) How come they didn't have this evidence from their users as well? Or maybe they didn't bother to ask...
I suppose the harm from social networks is not as pronounced, since you generally interact only with people and content you opted to follow (e.g. Mastodon).
The harm is from designing them to be addictive. Anything intentionally designed to be addictive is harmful. You’re basically hacking people’s brains by exploiting failure modes of the dopamine system.
If I remember correctly, other research has shown that it's not just the addictive piece. The social comparison piece is a big cause, especially for teenagers. This means Instagram, for example, which is highly visual and includes friends and friends-of-friends, would have a worse effect than, say, Reddit.
I think there’s a difference between something just being a bit addicting and scientifically optimizing something to be addicting. Differences in magnitude do matter because there are thresholds in almost everything where a thing becomes harmful.
Coca leaves can be chewed as a stimulant and it’s relatively harmless, though a bit addictive. Extract cocaine and snort it and it’s a lot more addictive. Turn it into freebase crack and it hits even harder and is even more addictive.
If this is coca leaves, Twitter is cocaine and TikTok is crack.
I had a similar thought. I wonder if any social media on a similar scale as FB/IG would have the same problems and if it's just intrinsic to social media (which is really just a reflection of society where all these harms also exist)
I think group chats (per interest gathering places) without incentives for engagement are the most natural and least likely to cause harm due to the exposure alone.
Big oil, big tobacco, big social, there seems to be a clear pattern of burying evidence of negative impacts of their products to satisfy some personal greed. These people are mentally ill and we need to help them.
> In a 2020 research project code-named “Project Mercury,” Meta (META.O) scientists worked with survey firm Nielsen to gauge the effect of “deactivating” Facebook and Instagram, according to Meta documents obtained via discovery.
Did they pick people at random and ask those people to stop for a while, or is this about people who choose to stop for their own reasons?
> To the company’s disappointment, “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,” internal documents said.
I don't think it's even a stretch at this point to compare Meta to cigarette companies.
Complete with the very expensive defence lawyers, payoffs to government, and waxing poetic about how the foundations of American democracy mean they must have the freedom to make toxic, addictive products and market them to children, whilst they simultaneously claim that of course they would never do that.
Journalists love that study but tend to ignore the likely causal reason for the improved outcomes, which is that users who were paid to stop using Facebook had much lower consumption of news, and especially political news.
Meanwhile I'm sitting here deliberating for the 200th time about deleting my WhatsApp account, which would mean I won't take part in group chats with my friends anymore ... in the end I won't delete it, and next up is deliberating for the 201st time about deleting my WhatsApp account ...
One of the worst outcomes of the last 20 years is how Big Tech companies have successfully propagandized us that they're neutral arbiters of information, successfully blaming any issues with "The Algorithm" [tm].
Section 230 is meant to be a safe harbor for a platform not to be considered a publisher but where is the line between hosting content and choosing what third-party content people see? I would argue that if you have sufficient content, you could de facto publish any content you want by choosing what people see.
"The Algorithm" is not some magical black box. Everything it does is because some human tinkered with it to produce a certain result. The thumb is constantly being put on the scale to promote or downrank certain content. As we're seeing in recent years, this is done to cozy up to certain administrations.
The First Amendment really is a double-edged sword here because I think these companies absolutely encourage unhealthy behavior and destructive content to a wide range of people, including minors.
I can't but help consider the contrast with China who heavily regulate this sort of thing. Yes, China also suppresses any politically sensitive content but, I hate to break it to you, so does every US social media company.
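To make the earlier "thumb on the scale" point concrete, here is a toy scorer (Python; the categories, weights, and field names are all invented for illustration, not any platform's real ranking code) showing how a hand-set multiplier table turns a "neutral" engagement prediction into an editorial choice:

    posts = [
        {"id": 1, "category": "news_publisher", "predicted_engagement": 0.9},
        {"id": 2, "category": "paid_partner",   "predicted_engagement": 0.4},
        {"id": 3, "category": "friend_post",    "predicted_engagement": 0.6},
    ]

    # Hand-tuned multipliers: someone chose to downrank one category and boost another.
    MANUAL_ADJUSTMENTS = {
        "news_publisher": 0.5,
        "paid_partner":   2.0,
    }

    def score(post):
        return post["predicted_engagement"] * MANUAL_ADJUSTMENTS.get(post["category"], 1.0)

    feed = sorted(posts, key=score, reverse=True)
    print([p["id"] for p in feed])   # the "most engaging" post no longer comes first

Every value in that table is a human decision, which is the sense in which the output is chosen rather than merely surfaced.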
Your solution to the government putting pressure on social media companies to censor is to give the government more power over them by removing section 230?
I'm saying social media companies are using Section 230 as a shield with the illusion of "neutrality" when they're anything but. And if they're taking a very non-neutral stance on content, which they are, they should be treated as a publisher not a platform.
Of course they did. Anyone not blind to what is going on already knows this. It is merely a matter of proving it in front of the law. That's all this is about. It's no longer about the question of whether or not they acted despicably.
I doubt serious consequences will follow this time, as serious consequences haven't followed all the previous times Meta/Facebook has been found guilty of crimes. However, it can serve as one more event to point out to naive people who don't want to believe the IT person that FB/Meta is evil, because they don't want to give up some interaction on it, or some comfort they get from using FB/Meta's apps or tools. I think it's a natural tendency most of us have: we use something, then we want extra-good proof when someone claims that thing is bad, because we don't want to change and stop using it. Plus FB/Meta will do anything they can to make people addicted to their platforms.
"Social media harm" sounds like one of these nebulous things which has no real definition
"Social media was a mistake, just like the internet" oh ok so we should just give up our gmails and reddits and everything because people insist on the widest possible swathe of categories
But actually when it comes to Metabook... I don't think Zuckerberg cares about anybody, and more to the point they refuse to give you a chronological service just for starters
People have died and their friends haven't known about it because the algorithm never showed them. People have noticed messages they've got from people trying to get in touch with them years later, because Zuck feels you should be using Facebook all the time, not email https://news.ycombinator.com/item?id=4151433
When your company is run by a megalomaniac this is what you get...
> In a text message in 2021, Mark Zuckerberg said that he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.”
> Zuckerberg also shot down or ignored requests by Clegg to better fund child safety work.
They should have been shut down and all the C-level execs arrested after Cambridge Analytica. The weapons-grade psyops they used to get Trump elected are crimes against humanity.
Meta is Zuck. Zuck is bad. Accept it, everyone. Why people hate Elon Musk but not Zuck is beyond me. Zuck has done real harm as well, some of it worse than Musk.
The usual reminders apply: you can allege pretty much anything in such a brief, and "court filing" does not endow the argument with authority. And, the press corps is constrained for space, so their summary of a 230-page brief is necessarily lacking.
The converse story about the defendants' briefs would have the headline "Plaintiffs full of shit, US court filing alleges" but you wouldn't take Meta's defense at face value either, I assume.
Every time they contact me I tell Meta recruiters that I wouldn't stoop to work for a B-list chucklehead like Zuck, and that has been my policy for over 15 years, so no.
You're not speaking to a jury. Regular people just living their lives only have to use their best judgment and life experience to decide which side they think is right. We don't need to be coerced into neutrality just because neither side has presented hard proof.
These discussions never discuss the priors: is this harm on a different scale than what preceded it? Like, is social media worse than MTV or teen magazines?
I loved MTV as a kid but it was as different to social media as can be.
Half the time you would turn it on and not like the video playing then switch the channel. Even if you liked the video that was playing, half the time the next video was something you didn't like so you would switch the channel.
Now imagine if MTV had used machine learning to predict the next video to show me personally that would best cause me to not change the channel.
That is not even really a different scale but a different category.
Why does it matter? We can't go back and retroactively punish MTV for its behavior decades ago. Not to mention we likely have a much better understanding of the impact of media on mental health now than we did then.
The best time to start doing the right thing is now. Unless the argument here is “since people got away with it before it’s not fair to punish people now.”
What policy proposals would you have made with respect to MTV decades ago, and how would people at the time have reacted to them? MTV peaked (I think) before I was alive or at least old enough to have formative memories involving it, but people have been complaining about television being brain-rotting for many decades and I'm sure there was political pressure against MTV's programming on some grounds or another, by stodgy cultural conservatives who hated freedom of expression or challenges to their dogma. Were they correct? Would it have been good for the US federal government in the 80s and 90s to have actually imposed meaningful legal censorship on MTV for the benefit of the mental health of its youth audience?
I think passively watching something on television is very different from today's highly interactive social media. Like, Instagram is literally a small percentage of people becoming superstars for their looks and lifestyles, and kids are expected to play along.
> people have been complaining about television being brain-rotting for many decades
This was a broad, simplified, unsupported claim that cannot be compared to the demonstrable, well-studied impacts of social media on people’s - especially young people’s - minds. They are not even remotely on the same level.
If we want to debate MTV specifically yes there are well studied, proven impacts of how various media can make people think of their own bodies and lives etc. that can be harmful. But again it’s not remotely to the same degree. Social media can be uniquely poisonous. There are a myriad of studies out there that confirm this but I’m happy to link some if you want me to.
If somebody wanted to it would probably not be very difficult to write an article all but conclusively proving that Instagram is more harmful than MTV.
It matters because it points towards a common failure mode which we've seen repeatedly in the past. In the 1990s, people routinely published news articles like the OP (e.g. https://www.nytimes.com/1999/04/26/business/technology-digit...) about how researchers "knew" that violent video games were causing harm and the dastardly companies producing them ignored the evidence. In the 1980s, those same articles (https://www.nytimes.com/1983/07/31/arts/tv-view-the-networks...) were published about television: why won't the networks acknowledge the plain, obvious fact that showing violence on TV makes violence more acceptable in real life?
Is the evidence better this time, and the argument for corporate misconduct more ironclad? Maybe, I guess, but I'm skeptical.
I don’t understand why things like social media are meant to be regulated by the government.
Isn’t religion where we culturally put “not doing things that are bad for you”? And everyone is allowed to have a different version of that?
Maybe instead of regulating social media, we should be looking at where the teeth of religion went, even in our separation-of-church-and-state society. If everyone thinks their kids shouldn’t do something, enforcing that sounds like exactly the kind of purpose religion is practically useful for.
Well, the more scientific and pluralistic our society becomes, the more religion is necessarily sapped of its ability to compel behavior. If you lived in 13th-century France, the Catholic Church was a total cultural force and thus could regulate behavior, but the very act of writing freedom of religion into law communicates a certain idea about religion: it's so unimportant that you can have whatever form of religion you want.
In any case, one ought to distinguish between "You shouldn't do things which are bad for you," and "You shouldn't do things you know are bad for others." Especially, "Giant corporations with ambiguous structures of responsibility shouldn't be allowed to do things which are bad for others."
Are you serious? People don't need religion to be moral. If what I see from religion these days is any indicator, I am extremely happy we kept our kids far far far away from it. From all of it. I will concede that not all religion is bad, but quite a lot of it is grift at best and cleverly disguised totalitarianism at worst. Many religious figures have absolutely no problem talking publicly about their "deity-given" right to dominate and control the lives of others for their own personal gain. I don't see how that fits inside any accepted definition of morality.
There are certain statements that should make you wary of study findings.
"People who X reported Y" is one of those phrases.
“people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,”
This is the same argument you see in cosmetic advertising as "Women who used this serum reported reduction in wrinkles"
If the study had evidence that doing X actually produces Y, it would be irresponsible not to say that directly. Dropping to "people reported" seems like an admission that there was no measurable effect other than the influence of the researchers on the opinions of the subjects.
Mental state can be difficult in this respect because it is much harder to objectively measure internal states. The fact that it is harder to do doesn't grant validity to subjective answers, though.
I was once part of a study that did this. It was fascinating seeing something that appeared to have no effect being written up using both "people reported" and "significant" (meaning, not likely by chance, but implying a large effect to the casual reader).
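For anyone who wants the "significant is not the same as large" point made concrete, here is a small self-contained Python sketch with made-up numbers (not the actual study's data): a 0.05-point shift on a 1-10 self-report scale becomes wildly "statistically significant" once the sample is large, while the standardized effect size stays trivially small.

    # Illustrative only: statistical significance vs. effect size.
    import random, statistics, math

    random.seed(0)
    n = 100_000
    control   = [random.gauss(5.00, 1.0) for _ in range(n)]  # self-reported score, 1-10
    treatment = [random.gauss(5.05, 1.0) for _ in range(n)]  # average shifted by 0.05

    m1, m2 = statistics.mean(control), statistics.mean(treatment)
    s1, s2 = statistics.stdev(control), statistics.stdev(treatment)

    # Two-sample z statistic and two-sided p-value via the normal CDF.
    z = (m2 - m1) / math.sqrt(s1**2 / n + s2**2 / n)
    p = math.erfc(abs(z) / math.sqrt(2))

    cohens_d = (m2 - m1) / math.sqrt((s1**2 + s2**2) / 2)  # standardized effect size

    print(f"difference in means: {m2 - m1:.3f}")
    print(f"p-value: {p:.2e}  (tiny, so 'significant')")
    print(f"Cohen's d: {cohens_d:.3f}  (a trivially small effect)")

A write-up can truthfully call that result "significant" while the underlying effect is too small to matter to anyone.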
Dude, who cares about study design and methodological validity! Let's just burn Meta down and put Zuck to jail! /s
What you are saying is valid criticism of the study, but people here have already made up their minds, so they downvote.
Another point to add is that 1 week is way too short - assuming there is an effect it might disappear or go in reverse after 1 month.
To all downvoters: if you think of yourself as smart rational people, please just use search/AI to see for yourself whether there is high quality evidence of _causal_ impact of social media on kids/mental health. The results are mixed at best.
Interestingly, the post is climbing its way back to zero.
I find the downvote without counterargument to be an odd response to a good-faith post. If anything it strengthens the argument, because the message it sends is "I don't have a counter to this, but I don't like it and I don't like that others will see this point of view."
I have come to realise that I have a much higher threshold when it comes to upvoting, downvoting, or rating things. It seems like a lot of people freely upvote, like, heart, or downvote without a care. We live in a world where a 4.8-star rating (composed entirely of an aggregate of zero- and five-star ratings) is considered a concern. So I try not to be bothered by it, but I'm pretty sure that subconsciously a downvote hurts more than someone saying "I disagree".
I’ve recently had to deal with my father cognitive decline & falling for scams left & right using Meta’s apps. This has been so hard on our family. I did a search the other day on marketplace and 100% of all sellers were scams, 20-30 of them.
Meta is a cancer on our society, I’m shutting down all my accounts. Back when TV/Radio/News paper were how you consumed news, you couldn’t get scams this bad at this scale. Our parents dealt with their parents so much easier as they cognitively declined. We need legal protections for elders and youth online more than ever. Companies need to be liable for their ads and scam accounts. Then you’d see a better internet.
My grandmother has been through the same thing. She was scammed out of all of her savings by accounts impersonating a particular celebrity. Thankfully the bank returned all of the money, but the perpetrators will never be caught, they operate out of Nigeria (one of them attached their phone to her Google account.)
Unfortunately these fake celebrity accounts are swarming her like locusts again. We tried to educate her about not using her real name online, not giving out information or adding unknown people as friends, but there's a very sad possibility that she doesn't fully understand what she's doing.
It was emotionally difficult going through her laptop to gather evidence for the bank. They know exactly how to romance and pull on heart strings, particularly with elderly people.
Meta's platforms are a hive of scammers and they should be held accountable.
> adding unknown people as friends
The number of my outer circle of friends who fall for the “copied profile” adding of unknown people or accept a friend request from the attractive young woman who somehow is interested in them is shocking. (I’m gauging this from looking at the “mutual friends” in the friend request.)
My friend is a bank manager. He says every day 2-3 elderly people come in confused about a scam.
This is a silent crisis impacting almost everyone. My grandma personally had her gold stolen by a scammer.
She is now in a care home for dementia.
I don’t think it’s a silent crisis per se, but just one people ignore.
There’s tons of media about it, tons of people are aware of elder fraud etc but people don’t want to think about the vulnerable of society. There’s been jokes about it and media about it going back decades.
People are aware but solving it requires an uncomfortable level of change in society, training and regulations.
As an aside, both Thelma and The Beekeeper are recent movies about elders being scammed and revenge being taken. Both very different but enjoyable.
Cable media is filled with ads for scams purporting to prevent other scams.
[flagged]
This shows profound ignorance of elderly people.
People survived with quite severe dementia hundreds of years ago. It doesn’t necessarily imply the rest of the body is unhealthy just their brain in a very specific way.
I hope you never have to experience the heartache and anguish that comes with a relative going through cognitive decline.
It really is a silent crisis. I warn my family constantly about the scams targeting the elderly, but even people my age fall for other ones.
My dad had fallen for two scams - one through WhatsApp, the other texts.
I’m not sure how much we can blame individual companies for this. Obviously they should be doing more - shutting down accounts that message people at random, for instance, but I feel like the scammers will find a way.
I also don’t know what else we can do. It should be easier for kids (or anyone else) to shut down their parent’s accounts at least once this happens, stop all wire and crypto transfers, etc.
Past that, I really don’t know.
Unfortunately I have a similar experience. If someone's working at Meta right now, and has been in the past 10 years, they're willingly and actively contributing to making society worse. Some open-source tech is not going to undo any of this, nor any of the past transgressions. I get the pay is probably great, but have some decency.
I suggested a hiring ban on anyone who ever worked at Meta some years back. It was not met with open arms. Going to try again here...
I think it's a valid suggestion that might result in people rethinking working for Meta if it was taken seriously.
Working for Meta is ethically questionable. The company does unspeakable damage to our country. It harms our kids, our elders, our political stability. Working for it, and a number of similar companies, is contributing to the breakdown of the fabric of our society.
Why not build a list of Meta employees and tell them they're not eligible for being hired unless they show some kind of remorse or restitution?
It could be an aggregation of LinkedIn profiles and would call attention to the quandary of hiring someone with questionable ethics to work at your organization. It might go viral on the audacity of the idea alone. That might cause some panic and some pause amongst prospective Meta hires and interns. They might rethink their career choices.
Generally it is a bad idea to punish defectors.
What's the end goal of that? Do you think Meta will run out of good engineers to hire?
With that attitude, how long does it take to justify going after the next Meta?
My litmus test is, do you think that the person managing Meta’s coffee supply is ethically questionable? If you met them, would you tell them that they need to quit, and would you consider them a bad person if they don’t? There are organizations that meet that bar, but I really don’t think Meta is one of them.
But hey, at least the money is good..
One must also check what YouTube recommends to their elderly parents, because it is easy for them to slide into being recommended harmful content, mostly psychological, religious or alternative-medicine topics. Note that not all of it is harmful, but most of it is published by very odd channels.
Opening YouTube on a new machine / OS / browser / without login is eye opening in terms of the awful stuff that gets recommended by default and how quickly it tilts worse if you watch any of it.
This, so much! It's outright disgusting. I have no idea why we tolerate this as a society. I fear it is because this diagnosis isn't widely known, it's happening on the fringes.
Everybody, including journalists and tech people, is moving about in their own algorithmic bubble nearly all the time. They just can't imagine how bad the situation has become out there. We're turning a blind eye to the very thing that is destroying our societies.
YouTube should be held liable for what it is pushing. It literally can kill and seriously harm people.
I think that any of these algorithmic feeds, by any company, should be held as if the companies have vetted the content and it is theirs. And the culpability that goes with that.
The president of the United States of America pushed a horse de-wormer as a preventative during a world-wide pandemic.
Good luck getting him, his administration, or his Department of Justice, to hold YouTube to a higher standard.
Well, let’s post a deepfake about some left strawman and watch him find the time pronto.
YouTube also has Kitboga, Pirogi, Deeveeaar, etc., which are very helpful. I introduced my mother, who has early dementia and can't do much, so she watches a lot of Netflix and YouTube, to Kitboga, and she loved it and found other scambaiters. I'm stoked. I know she will tell a scammer to f off now.
This seems like credit bureaus charging us to protect our data they keep losing.
The old, mentally disabled guy in New Jersey falling over and dying trying to get to a date with a meta bot really broke something in me.
That was horrible. It also makes me think of all that research on "unhappiness vs spending"...
So many of us have been there - it is brutal. These platforms are ripping us apart from each other, providing criminals easy access to the most vulnerable, and concentrating wealth to an unimaginable degree.
But hey, it's a free market /s
Maybe EU's regulation of digital markets isn't such a bad idea after all.
What did you search for on marketplace to find the scams?
One third of all scams in the US are operated on Meta platforms.
They have a policy that if a scammer’s ad spend makes up more than 0.15% of Meta revenue, moderators must protect the scammer instead of blocking it.
Meta is working hard to scam your dad for ad spend. It’s hugely profitable for them and they are helping to grow it per internal policy. They are only interested in fostering big-time scammers.
I would like to understand the downvotes: is it from doubting these facts? If so, I will post the sources (which were recent mainstream news on the front page of HN). Or is it because of the negative sentiment about Meta? Or disagreement that Meta has any responsibility over moderating scams they promote?
These are verified facts that make up the substance of my message:
- Meta protects their biggest scammers, per internal policy from leadership
- Meta makes a huge profit from these scammers (10% of total revenue; or in other words, their scam revenue is approximately 5x larger than the total Oculus revenue)
- The scams that Meta promotes represent one-third of the total online scams in the US
> One third of all scams in the US are operated on Meta platforms.
And 100% of all internet scam traffic in the US goes through either US ISPs or US cell carriers.
Should those entities be held liable instead? Or maybe, Meta instead should scan users' private messages on their platforms and report everything that might seem problematic (whatever the current US administration in power considers as problematic) to the relevant authorities?
My personal take: there should be more effort in going after the actual scammers, as opposed to going after the "data pipes" (of various abstraction levels) like Meta/ISPs/cell carriers/etc.
Meta is not a pipe. Meta curates the feed to maximise their income to the detriment of everyone else.
> Meta projected 10% of its 2024 revenue would come from ads for scams and banned goods
https://www.reuters.com/investigations/meta-is-earning-fortu...
If the ISP were taking ad money they knew came from scams... yes, they should be liable.
International law and extradition has already proven to be too slow and small scale to be effective.
> We need legal protections for elders and youth
Offline too.
Predation on the elderly is an industry.
Our own attempts to do something about (successful) scammers were met with utter indifference by my parents' state's (Arizona) attorney general, county sheriffs, and local police.
If you really want to hurt Meta, don't delete your accounts - sell these real, aged accounts to spammers for a few bucks.
That may hurt Meta, but not nearly as much as it hurts the elderly people who the spammers will defraud.
Then instead use them to scrape your friends' timelines and republish as RSS.
Why would that hurt Meta? The entire point here is that they don't care and if anything benefit from such activity.
I’m in a group chat and one member is a Cambodian slave that periodically tries to start romance scams
and we’re like “you’re free now, go home” (because of the economic sanctions and raid)
we recently had a vote on whether she should be booted from the chat, we voted no for the comedic value
so anyway, sorry you’re going through that, it’s wild out there
At this point, I think all of the big tech companies have had some accusations of them acting unethically, but usually, the accusations are around them acting anticompetitively or issues around privacy.
Meta (and social media more broadly) are the only case where we have (in my opinion) substantiated allegations of a company being aware of a large, negative impact on society (mental wellness, of teens no less), and still prioritizing growth and profit. The mix is usually: grow at all costs mindset, being "data-driven", optimizing for engagement/addiction, and monetizing via ads. The center of gravity of this has all been Meta (and social media), but that thinking has permeated lots of other tech as well.
We have evidence for this in other companies too. Oil & Gas and Tobacco companies are top of mind.
Don’t forget the All-Fats-Are-Bad sugar scam.
It's a well worn playbook by now. But Meta seems to be the only one where we now have proof of internal research being scuttled for showing the inconvenient truth.
True, but there haven't even been any publicly known internal research attempts at, for example, YouTube/Google about the content they are pushing and, probably more importantly, the ads they keep pushing into people's faces. I bet FB/Meta are kicking themselves now for even running such internal research in the first place.
My point is that all of these big tech giants would find that they are a harmful cancer to society, at least in parts. Which is probably why they don't even "research" it. This way they can continue to act oblivious to the fact.
> I bet FB/Meta are kicking themselves now, for even running such internal research in the first place.
100%. This is what people miss in this thread when they're talking about seeking to punish companies who knowingly harm society. All that is going to do is discourage companies from ever seeking to evaluate the effects that they're having.
Then internal evaluation must be made mandatory. This is something that can be regulated, there just isn't the will for it.
Won't the absence of punishing companies that knowingly harm society in a way encourage more of the same conduct? What's your suggestion?
The tobacco industry also did that, but in many ways this seems different. Tobacco was something that had existed for millennia, a scourge introduced to the wider world by the tribes of the “new world”; Facebook was a primary player in creating the whole social media space, something that effectively did not exist before in the predatory and malignant form that was used to create a digital panopticon, or, more accurately and far worse, a system where your participation is required for a certain kind of success.
Social media is abusive and utterly psychotic and narcissistic, because that is the type of people who created it using basic psychological abuse and submission tactics. Banks, casinos, games, hollywood/TV, news/politics, social media, contemporary academia and religion, etc.; they all function on being endorphin dealers/dispensers.
Petrochemical, Dow & Industrial Big Chem, Pharmaceutical companies, health insurance companies, finance companies, Monsanto, mining companies.
I mean, let's be real. There really isn't a big company that achieves scale that doesn't have skeletons in the closet. Period.
What do you think the social effects of large-scale advertising are? The whole point is to create false demand, essentially driving discontent. I've no idea if Google et al have ever done a formal internal study on the consequences, but it's not hard to predict what the result would be.
The internet can provide an immense amount of good for society, but if we net it on overall impact, I suspect that the internet has overall had a severely negative impact on society. And this effect is being magnified by companies who certainly know that what they're doing is socially detrimental, but they're making tons of money doing it.
I agree false demand effects exist. But sometimes ads tell you about products which genuinely improve your life. Or just tell you "this company is willing to spend a lot on ads, they're not just a fly-by-night operation".
One hypothesis for why Africa is underdeveloped is they have too many inefficient mom-and-pop businesses selling uneven-quality products, and not enough major brands working to build strong reputations and exploit economies of scale.
> But sometimes ads tell you about products which genuinely improve your life.
I’d argue that life improvement is so small it’s not worth the damage of false demand. I can maybe think of one product that I saw a random ad for that I actually still use today. I’d say >90% of products being advertised these days are pointless garbage or actually net negative.
Advertising is cancer for the mind and our society severely underestimates the harm it’s done.
The positive benefits in education, science research and logistics are hard to overstate. Mass advertising existed before the internet. Can you be more explicit about which downsides you think the additional mass advertising on the internet caused that can come anywhere close to the immeasurable benefits provided by the internet?
I'm somewhat unsurprised that my off-the-cuff hypothesis has been tested, and is indeed likely accurate. [1] Advertising literally makes people dissatisfied with their lives. And it's extremely easy to see the causal relationship for why this is. Companies like Google are certainly 100% aware of this. And saying that advertising existed before the internet is somewhat flippant. Obviously it did, but the scale has increased so dramatically that it's reaching the point of absurdity.
And a practical point on this topic is that the benefits of the internet are, in practice, fringe, even if freely available to everyone. For instance, there are now free classes from most top universities online, on just about every topic, that people can enroll and participate in. There are literally zero barriers to receiving a free premium-quality education. Yet the number of people that participate in this is negligible and overwhelmingly composed of people that would have had no less success even prior to the internet.
By contrast the negatives are extremely widespread on both an individual and social level. As my post count should demonstrate, I love the internet. And obviously this site is just one small segment of all the things I do on the internet. In fact my current living would be impossible without it. Yet if I had the choice of pushing a button that would send humanity on a trajectory where we sidestep (or move along from) the internet, I wouldn't hesitate in the slightest to push it.
[1] - https://hbr.org/2020/01/advertising-makes-us-unhappy
It's on the same scale of chemical companies covering up cancerous forever chemicals.
Cigarette companies hiding known addictive effects?
And more recently, pretending vapes are a solution to cigarettes.
PG/VG base is exactly the same stuff that has been used in foggers/hazers for decades. If there were negative health effects associated with the stuff, we'd have spotted it long ago. As for nicotine, well, it's the same stuff as in cigarettes, we know about its effects again thanks to decades of research.
The only thing left is questionable flavoring agents and dodgy shops with THC oil vapes (although that kind of contamination is now known and it's been ages since I last heard anything).
At large, vapes are better than cigarettes.
>PG/VG base is exactly the same stuff that has been used in foggers/hazers for decades. If there were negative health effects associated with the stuff, we'd have spotted it long ago.
How many people are directly exposed to it daily? Technicians and performers are probably it; everyone else's exposure is very rare, so it's possible any side effects took a while for the medical community to pick up on, until everyone started vaping.
>At large, vapes are better than cigarettes.
Better yes, they are harm reduction over cigarettes. However, it's not "good" and should be as regulated as cigarettes are.
Cite?
It wasn't inhaled in the way vapes are. The dose is higher and the exposure is chronic.
There is zero comparison. Atmospheric 'fog' versus a closed system drawn directly into the lungs, with the intention of absorption, is not the same thing.
Before this, the pro-vape crowd used to push the trope of "it's used in nebulizers". Nope, it's not. Ventolin does not use propylene glycol: https://www.drugs.com/pro/ventolin.html Maxair? Nope: https://www.drugs.com/pro/maxair-autohaler.html Airomir didn't either.
> There is one study looking at the potential to use PG as a carrier for an inhaled medicine (https://www.ncbi.nlm.nih.gov/pubmed/18158714) and another which mentions that PG or ethanol may be used as a cosolvent (https://www.ncbi.nlm.nih.gov/pubmed/12425745) in nebulizers, but no evidence presented of an asthma inhaler or nebulizer that is actually used today containing PG.
Even then, there's a huge difference between being on stage with a fog machine, or 3-4 puffs a day of a smaller amount from a nebulizer, and chronic hundreds of puffs a day from a vape.
> Meta are the only case where we have substantiated allegations of a company being aware of a large, negative impact on society
Robinhood has entered the chat
Why would one specific industry be better? The toxic people will migrate to that industry and profit at the expense of society. It’s market efficiency at work.
I do think an industry is often shaped by the early leaders or group of people around them. Those people shape the dominant company in that space, and then go off to spread that culture in other companies that they start or join. And, competitors are often looking to the dominant company and trying to emulate that company.
not sure how much sense that makes when the overarching culture is profit seeking
> I do think an industry is often shaped by the early leaders or group of people around them
Yes, but did any industry live long enough to not become the villain?
Early OpenAI set the tone of safe, open-source AI.
The next few competitors also followed OpenAI’s lead.
And yet, here we are.
> Early OpenAI set the tone of safe, open-source AI.
Early OpenAI told a bunch of lies that even (some of) their most-ardent fans are now seeing through. They didn't start off good and become the villain.
> Early OpenAI set the tone of safe, open-source AI.
Um, wat?
> Um, wat?
https://github.com/openai
For the uninformed, what large negative impact has Robinhood had on society?
Gamifying day trading is just turning the retail market into gambling. Obvious objections will be that this has been possible for a long time now. But never did I know young men to casually play the market day to day like Wall Street Bets do now the way they would follow sports in the past.
Exploiting unsophisticated investors. Trading on margin used to be for extremely experienced and educated people working for a large financial institution. The risk of margin trading is extreme, with unlimited losses.
https://www.nbcnews.com/business/business-news/gambling-addi...
tip of the iceberg.
Gamifying and advertising the shit out of options trading to make it more attractive to morons isn't, strictly speaking, an improvement of our world.
Also, tobacco companies and oil companies famously got into trouble from revelations that they were perfectly aware of their negative impacts. For the gambling and alcohol industry, it probably wouldn't even make the news if some internal report leaked that they were "aware" of their negative impact (as if anyone thought they would not be?)
Social media is way down on the list of companies aware of their negative impact. The negative impact arguably isn't even central to their business model, which it certainly is for the other industries mentioned.
The leaders and one of the announcers of Radio Télévision Libre des Mille Collines got 30 years to life sentences for their part in the Rwandan genocide.
> Meta (and social media more broadly) are the only case where we have (in my opinion) substantiated allegations of a company being aware of a large, negative impact on society (mental wellness, of teens no less), and still prioritizing growth and profit
Them doing nothing about hate speech that fanned the flames for a full blown genocide is pretty terrible too. They knew the risks, were warned, yet still didn't do anything. It would be unfair to say the Rohingya genocide is the fault of Meta, but they definitely contributed way too much.
We all know this. As people in the tech industry. As people on this website. We know this. The question is, what are we going to do about it? We spend enough time complaining or saying "I'm going to quit facebook" but there's Instagram and Threads and whatever else. And you alone quitting isn't enough. We have to help the people really suffering. We can sometimes equate social media to cigarettes or alcohol and relate the addictive parts of that but we have to acknowledge tools for communication and community are useful, if not even vital in this day and age. We have to find a way to separate the good from the bad and actively create alternatives. It does not mean you create a better cigarette or ban alcohol for minors. It means you use things for their intended purpose.
We can strip systems like X, Instagram, Facebook, YouTube, TikTok, etc. of their addictive parts and get back to utility and value. We can have systems not owned by US corporations that are fundamentally valuable to society. But it requires us, the tech-savvy engineering folk, to make those leaps. Because the rest of society can't do it. We are in the position of power. We have the ability.
We can do something about it.
I wrote something to that effect two days ago on a platform I'm building. https://mu.xyz/post?id=1763732217570513817
At the moment the biggest hope I have is there’s client side tech that protects us from these dark patterns. But I suspect they’ll have their own dark patterns to make them profitable.
I guess we can speculate or theorise on potential strategies but beyond hope we should also try to do something. I have seen some X clones with variations but a lot of the same behaviour plays out when you have no rules around posting, moderation, types of content, etc. Effectively these platforms end up in the same place of gamification and driving engagement through addictive behaviours because they want users. Essentially I think true community is different, true community keeps each other accountable and in check. Somehow we need to get back to some of that. Maybe co-operative led tools. Non profits. I think Mastodon meant well and didn't end up in the right place. Element/Matrix is OK but again doesn't feel quite right. Maybe we should never try to replicate what was, I don't know. BitChat (https://bitchat.free/) is an interesting alternative from Jack Dorsey - who I think is trying to fix the loss of Twitter and the stronghold of WhatsApp.
Companies can't really be expected to police themselves.
I remember reading that oil companies were aware of global warming in internal literature even back in the 80's
> Companies can't really be expected to police themselves.
Not so long as we don't punish them for failure to. We need a corporate death penalty for an organization that, say, knowingly conspires to destroy the planet's habitability. Then the bean counters might calculate the risk of doing so as unacceptable. We're so ready and willing to punish individuals for harm they do to other individuals, but if you get together in a group then suddenly you can plot the downfall of civilization and get a light fine and carry on.
Corporate death penalty as in terminate the corporation?
Why not the actual death penalty? Or put another way, why not sanctions on the individuals these entities are made up of? It strikes me that qualified immunity for police/government officials and the protections of hiding behind incorporation serve the same purpose - little to no individual accountability when these entities do wrong. Piercing the corporate veil and pursuing a loss of qualified immunity are both difficult - in some cases, often impossible - to accomplish in court, thus incentivizing bad behavior for individuals with those protections.
Maybe a reform of those ideas or protocols would be useful and address the tension you highlight between how we treat "individuals" vs individuals acting in the name of particular entities.
As an aside, both protections have interesting nuances and commonalities. I believe they also highlight another tension (on the flip-side of punishment) between the ability of regular people to hold individuals at these entities accountable in civil suits vs the government maintaining a monopoly on going after individuals. This monopoly can easily lead to corruption (obvious in the qualified immunity case, less obvious but still blatant in the corporate case, where these entities and their officers give politicians and prosecutors millions and millions of dollars).
As George Carlin said, it's a big club. And you ain't in it.
In my conception, part of the corporate death penalty would be personal asset forfeitures and prison time for individuals who knew or should have known about the malfeasance.
> prison time for individuals
Corporal punishment exists for individuals too.
Perhaps it should be on the table for executives (etc) whose companies knowingly caused the deaths or other horrific outcomes for many, many people?
This is what China does. The problem is that the application is a little, uh, selective. As soon as you get any kind of corruption it becomes a power play between different factions in the elites.
You can't do any of this without a strong, independent, judiciary, strongly resistant to corruption. Making that happen is harder than it sounds.
And it still won't help, because the perps are sociopaths and they can't process consequences. So it's not a deterrent.
The only effective way to deal with this is to bar certain personality types from positions of power.
You might think that sounds outrageous, but we effectively have that today, only in reverse. People with strong moral codes are actively excluded from senior management.
It's a covert farming process that excludes those who would use corporate power constructively rather than abusing it for short-term gain.
In these cases, what is prison time going to accomplish that a severe enough monetary remedy would not? Putting someone in a prison cell is a state power (criminal remedy). I think that is a useful distinction generally, and a power that should be employed only when legitimized through some government process which has a very high bar (beyond a reasonable doubt, criminal rules of evidence, protections against self incrimination etc), as it deprives someone of their physical liberty.
It strikes me that if you also appreciate this distinction, then your remedy to corporations that have too much power is to give the government even more power?
Personally, I would like to see more creative solutions that weaken both government and corporations and empower individuals to hold either accountable. I think the current gap between individuals and the other two is too severe, I'm not sure how making the government even more powerful actually helps the individual. Do you want the current American government to be more powerful? Would your answer have been different last year?
I do not see any equivalence between corporate power and government power. The population as a whole controls government power. Corporate power is constrained only by government power. I think one of the most pernicious notions in our society is that the idea that "the government" is something separate from ordinary people.
Of course, our current government has a lot of problems, but that doesn't mean I don't want the government to have power. I just want it to have power to do what the population actually wants it to do (or, perhaps, what the population will actually be happiest with).
What would be your proposed mechanism for empowering individuals? How would such a mechanism not ultimately rely on the individual leveraging some larger external power structure (like a government)? I think if we want to empower all individuals roughly equally (i.e., not in proportion to their wealth or the like), then what we wind up with is something I'd call a government. Definitely not the one we have, but government nonetheless.
It's a fair rejoinder, except I think it mistakes idealism about government for realism. In reality, the government becomes an entity unto itself. This is a universal problem of government. Democratic institutions are themselves supposed to be a check on this impulse. However, as you are aware, these are not absolute. A check that foresees a need to restrain government also sees a need to empower the government to restrain people.
I think however when we acknowledge that men are not angels, and that therefore government itself is dangerous merely as a centralization of power, then no, you cannot simply say well government is supposed to be of a different type of power than corporations. Because again, in reality this is often not the case. This is why several of the American founders and many of those who fought in that revolution also became anti federalists or argued against constitutional ratification.
I don't know what the answer is, but I don't think there has ever been a situation where it is accurate to say the population as a whole controls the government. In practice it doesn't work that way, and is about as useful as saying well the market controls corporations. I think something more like anti federalism could use a renaissance... the government should be weak in more cases. Individuals should be empowered. A government power to hold a corporation accountable could then rest on simply its strict duty to enforce a civil remedy. That is of a different nature than the government deciding on its own who (and more importantly - who not) to prosecute.
But I appreciate your push back, there are indeed no easy answers.
Bullshit. I have no control whatsoever over the government. It is completely separate from me. I have 1000x more power over Amazon by my ability to choose to not buy from them than my vote gives me over government bureaucracy. That's why whenever I have a problem with an Amazon order it is resolved in minutes when I contact support. Good luck if you have a problem with the government.
Amazon are not resolving your issue in minutes because you have power over them. They do it because it is efficient and profitable for them to keep customers happy. Your actual influence over a trillion-dollar company is tiny compared to your influence as a voter. One customer taking their business elsewhere does not affect Amazon in any meaningful way. One vote is counted directly. The gap is between how it feels and how the power actually works. This of course assumes you live in a democratic country.
Your first two sentences are a total contradiction.
AMZN shareholders shiver at the sheer control you have over them. Will he return that USB dongle?
Hah. Try the same with Google now. Getting a problem resolved with them as a consumer is a cakewalk compared to the government.
You are a user of Google, but you probably aren't a customer.
Just nationalize the company. Make shareholders fear this so much that they keep executives in check.
My view is that the corporate death penalty is either dissolution or nationalization, whichever is less disruptive. If you make your company "too big to fail" without hurting loads of people, then use it to hurt people, then the people get your company. If it's a smaller operation it can just go poof. The priority should be ensuring the bad behavior is stopped, then that harm is rectified, and finally that an example be made to anyone else with a clever new way to externalize harm as a business model.
Sounds like a very extreme remedy. Not sure you want whatever government is elected every four years to have this power. Doesn't address the concern re regulatory capture, could lead to worse government incentives. Why not focus on allowing regular people to more realistically hold corporations and their owners/officers liable in civil courts? It's already hard enough given the imbalance of funds, access and power... but often legal doctrine makes the bar to clear impossible at the outset.
I would posit that we are in the current political situation precisely because we do not hold the capital class accountable. Do you sincerely believe that investors losing their investment is a “very extreme” response to gross corporate lawbreaking on their behalf?
We are in this situation because we elect people who do not hold the capital class accountable. Look at the people we elect. How would them running companies be any better?
The capital class chooses and presents the people you can vote for. They decide what issues are talked about in the media, they decide who gets the most funding, and they probably have ways of getting rid of or corrupting the people who somehow get popular without first being accepted by at least some of the capital class.
We are in the situation because the capital class have turned the people we elect into servile puppets. Because they have simply been allowed to become too big and powerful.
I disagree with you there. We need to stop infantilising politicians.
They aren't servile puppets because they are children, they are servile puppets because that's what they are paid (and threatened, via financing their more pliable opponents) to do.
Why not make the civil case path easier, then? The extreme part of your remedy is the idea of a government taking over and owning a corporation. That creates bad incentives. I think if individuals could reasonably expect to be able to knock people like Mark Zuckerberg out of the billionaire class in a civil suit, then yes, he and the types of people he represents would behave better. Having the government run Facebook or Enron or Google or whatever sounds both less desirable than empowering individuals and weakening corporate protections in civil cases and, frankly, worse than the prevailing situation re the "capital class". If you think the current political situation is bad, the last thing you should want is more government power.
So punish the owners of the company because it's harmful, but keep the harmful company around just now controlled by the government?
Sometimes harm is a matter of degree and intent
Doctors selling you fentanyl so you can be sedated for surgery is a good thing.
Drug Dealers selling you fentanyl so you can get high is a bad thing
Except drug dealers do not sell you fentanyl just so you can get high because they do not care. They do not care about YOUR OWN intention. People demand, they supply. And these people can have legitimate reasons.
You ever been approached by anyone selling drugs?
Of course they care about you getting high, that's their sales pitch
What would they fear about it? Nationalisation would include compensation (as per relevant laws), so the shareholders don't lose a lot. Maybe the compensation would be less than the potential highs of the stock price, but it's not like they entirely lose out
The actual death penalty is not a good idea for several reasons, including possibility of error (even if that possibility is small).
(In the case of a corporation, also many people might be involved, some of whom might not know what it is, therefore increasing the possibility of error.)
However, terminating the corporation might help (combined with fines if they had earned any profit from it so far), if there is not an effective and practical lesser punishment which would prevent this harm.
However, your other ideas seem to be valid points; one thing that you mention is, government monopoly can (and does) lead to corruption (although not only this specific kind).
“It is forbidden to kill; therefore all murderers are punished unless they kill in large numbers and to the sound of trumpets.” ― Voltaire
Problem remains: What do we do, if others don't care and violently start killing our group? Do we reward them, throwing away all our weapons and making them our new government?
This question of course currently has a very real, real-world parallel.
See also: just war theory
People can manage to find justifications for all sorts of atrocities, including destruction of the biosphere.
Just a few days ago, someone replied to one of my comments saying that considering the lives of people who aren't born yet is a completely immoral thing to do, meaning making anyone alive today sacrifice something to protect the planet in 100 years is immoral. So I guess people can find all sorts of justifications.
People are being harmed today, not just hypothetical people born 100 years later.
Of course that is wrong and it is not immoral; but, if you want to do it in the moral way, you have to consider the lives of all living things (plants and animals), including but not limited to humans. Furthermore, there is the consideration of what exactly has to be sacrificed and what kind of coercion is being used (which might be immoral for a different reason); morals are not as simple as they would say.
But, yes people do find all sorts of justifications, whether or not they are any good (although sometimes it is not immediately clear if it is any good, unfortunately).
It is the inevitable outcome of materialism, hedonism, & short-term thinking. I think it's going to get worse before it gets any better.
Just yesterday another HN user told me that always-on DRM is a pure benefit for the consumer, when it comes from Valve Software.
Prime example: animal agriculture. By far the biggest driver of biodiversity loss and nature destruction. Yet people justify it constantly with trivial things like taste, convenience, tradition, etc.
Perhaps also being uninformed? I personally don't know why loss of biodiversity would be bad. Is that common knowledge?
> We're so ready and willing to punish individuals for harm they do to other individuals, but if you get together in a group then suddenly you can plot the downfall of civilization and get a light fine and carry on.
Surely "plot the downfall of civilization" is an exaggeration. Knowing that certain actions have harmful consequences to the environment or the humanity, and nevertheless persisting in them, is what many individuals lawfully do without getting together.
Well said, and yes, this is practically what must happen.
The group of pretty much all humans is such a group because we all conspire to burn fossil fuels. Do you really think a global civilization death penalty is a good idea? That's throwing out the baby with the bathwater.
> even back in the 80's
The 1980s is when the issue was finally brought into the political conversation. Shell internal documents go back as far as 1962: https://www.desmog.com/2023/03/31/lost-decade-how-shell-down...
As for science itself: the first scientific theories on greenhouse effects were published in the 1850s -- and the first climate model was published in 1896: https://daily.jstor.org/how-19th-century-scientists-predicte...
No entity can police itself. Not even the police.
Companies, non-profits, regulators, legislative branches of government, courts, presidential administrations, corporate bureaucrats, government bureaucrats, entrepreneurs, regular citizens. They cannot self-police.
That's the motivation for having a system of _checks and balances_[a]: We want power, including the power to police, to be distributed in a society.
---
[a] https://www.britannica.com/topic/checks-and-balances
1970s
https://news.harvard.edu/gazette/story/2023/01/harvard-led-a...
Global warming was understood for almost a century by 1980
Your second point is right, but depressingly it was the 50s instead of the 80s.
>Companies can't really be expected to police themselves.
Companies can't. Employees can. If someone's still working at Meta, they are ok with it.
The problem is that our current ideology basically assumes they will be - either by consumer pressure, or by competition. The fact that they don't police themselves is then held as proof that what they did is either wanted by consumers or is competitive.
Maybe more parallels to tobacco companies. Incredible amount of taxes and warnings and rules forbidding kids from using it are the solutions to the first problem and likely this second one too.
To your point...
1. "The Tobacco Institute was founded in 1958 as a trade association by cigarette manufacturers, who funded it proportionally to each company's sales. It was initially to supplement the work of the Tobacco Industry Research Committee (TIRC), which later became the Council for Tobacco Research. The TIRC work had been limited to attacking scientific studies that put tobacco in a bad light, and the Tobacco Institute had a broader mission to put out good news about tobacco, especially economic news." [0]
2. "[Lewis Powell] worked for Hunton & Williams, a large law firm in Richmond, Virginia, focusing on corporate law and representing clients such as the Tobacco Institute. His 1971 Powell Memorandum became the blueprint for the rise of the American conservative movement and the formation of a network of influential right-wing think tanks and lobbying organizations, such as The Heritage Foundation and the American Legislative Exchange Council."
[0] https://en.wikipedia.org/wiki/Tobacco_Institute
[1] https://en.wikipedia.org/wiki/Lewis_F._Powell_Jr.
"Companies can't really be expected to police themselves."
so does government
No one expects government to police itself.
Government in functioning democratic societies is policed by voters, journalists, and many independent watchdog groups.
Any examples of such societies?
Currently, not sure.
Maybe there is a country, at some period in living memory hopefully, that we could use as a reference?
France, Germany, UK, Switzerland, Netherlands, Belgium are a few I'm familiar with. There are of course areas for improvement, but in all of those you have a strong press that can annihilate politicians for crimes, as well as more or less working institutions that punish corruption.
Take a look at France, where a former president went to prison. Okay, it got commuted to house arrest (same sentence as a former PM candidate for president), but that's still a pretty serious punishment, especially for a such a high level politician.
> Take a look at France, where a former president went to prison. Okay, it got commuted to house arrest
There is no house arrest, he appealed and is innocent until proven guilty. People stay in prison after appealing in case there is a serious risk of them fleeing the country or in case they present a danger to society, both of these have been deemed low enough
> so does government
The public is supposed to police the government, and replace it if it acts against the public interest.
But now that you mention it, perhaps we should also give everyone an equal vote on replacing the boards of too-big-to-fail corporations
Not so sure about that.
The US-ians voted twice for Trump so far. I have difficulty seeing the good it did for the world, let alone the USA and the US-ians.
Specifically for corporations, giving everyone in the world the power to vote for dismantling Meta (a world mega-corp) might be interesting to see, though.
He is doing good by his supporters, at least as far as they think. He has delivered all sorts of stupid, cruel and self-destructive stuff that they want.
The problem is that their wants have been steered in that direction by decades of cynical media manipulation, but that's just the nature of democracy.
He was chosen to replace the DEI/Woke-infested government and successfully achieved that.
True that... but it seems that they are fostering an environment for SA and even p3dophilia... Channel 4 News did a piece on it.
> Meta required users to be caught 17 times attempting to traffic people for sex before it would remove them from its platform, which a document described as “a very, very, very high strike threshold."
I don’t get it. Is sex-trafficking-driven user growth really so significant for Meta that they would have such a policy?
The "catching" is probably some kind of automated detection scanner with an algo they don't fully trust to be accurate, so they have some number of "strikes" that will lead to a takedown.
There is always a complexity to this (and don't think I'm defending Meta, who are absolutely toxic).
Like Apple's "scanning for CSAM", and people said "Oh, there's a threshold so it won't false report, you have to have 25+ images (or whatever) before it will"... Like okay, avoid false reporting, but that policy is one messy story away from "Apple says it doesn't care about the first 24 CSAM images on your phone".
We don’t know. But as you read from the article, Meta’s own employees were concerned about it (and many other things). For Zuck it was not a priority, as he said himself.
We can speculate. I think they just did not give a fuck. Usually, limiting grooming and abuse of minors requires limiting those minors' access to various activities on the platform, which means those kids go somewhere else. Meta specifically wanted to promote its use among children below 13 to stimulate growth; the fact that this often resulted in the platform becoming dangerous for minors was not seen as their problem.
If your company is driven by growth über alles à la venture capitalism, growth comes before everything else. Including child safety.
Reading Careless People by Sarah Wynn Williams is eye opening here, and it's pretty close to exactly that.
> I think they just did not give a fuck.
It's that people like Zuck and Sandberg were just so happily ensconced in their happy little worlds of private jets and Davos and so on that they really could not care less if it wasn't something that affected them (and really, the vast majority of issues facing Meta don't affect them, only their bonuses and compensation).
Your actions will lead to active harm? "But not to me, so, so what, if it helps our numbers".
Of course it's not. We could speculate about how to square this with reason and Meta's denial; perhaps some flag associated with sex trafficking had to be hit 17 times, and some people thought the flag was associated with too many other things to lower the threshold. But the bottom line is that hostile characterizations of undisclosed documents aren't presumptively true.
I just hope that in 100 years time, people will be shocked at the prevalence of social media these past 2 decades
I predict that in much sooner than 100 years social media will be normalized and it will be common knowledge that moderating consumption is just as important as it is with video games, TV, alcohol, and every other chapter of societies going through growing pains of newly introduced forms of entertainment. If you look at some of the old moral panic content about violent video games or TV watching they feel a lot like the lamentations about social media today. Yet generations grew up handling them and society didn’t collapse. Each time there are calls that this time is different than the last.
In some spaces the moral panic has moved beyond social media and now it’s about short form video. Ironically you can find this panic spreading on social media.
We moderate consumption of alcohol, sugar, gambling, and tobacco with taxes and laws. We have regulations on what you can show on TV or films. It is complete misuse of the term to claim a law prohibiting sale of alcohol for minors is ‘moral panic’. It is not some individual decision and we need those regulations to have a functioning society.
Likewise, in a few generations we will hopefully find a way to transfer the cost in medical bills of the mental-health harm caused by these companies onto those companies in taxes, like we did with tobacco. At that point, using these apps will hopefully be seen as being as lame as smoking is today.
Only over-the-air TV is regulated by the FCC. Films and non-broadcast TV are only regulated if they contain obscene content. If anything, there was more regulation of film production in the past. Hays Code, etc.
> We moderate consumption of alcohol, sugar, gambling, and tobacco with taxes and laws.
For the US, would it be accurate to put "sex" on there as well?
> Yet generations grew up handling them and society didn’t collapse.
Society did not collapse. That does not mean those things did not have negative effects on society.
I don't think any of those items have had the significance and divisiveness of social media, or have been controlled by billionaires who have corrupted the election systems.
Social media seems far more dangerous and harder to control because of the power it grants its "friends". It'll be much harder to moderate than anything else you mentioned.
"Society didn't collapse" is a very very low bar.
In 100 years' time they will be so fried by AI they won't be capable of being shocked. Everyone will just be swiping on generated content in those hover chairs from WALL-E.
In Mad Men, we have these little mind-blown moments from the constant sexism, racism, smoking, alcoholism, even the attitudes towards littering. In 2040 someone's going to make a show about the 2010s-2020s and they'll have the same attitude towards social media addiction.
Not only social media but addiction to phones too. The impact on kids and teenagers is well documented by now.
Where are the parents when you need them?
We need to boycott Meta. Otherwise, social media will destroy our children.
"Priorities" quote: Mark Zuckerberg said that he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.”
You need to be careful with those arguments because you can fall into the trap of "think of the children" for everything. As an example, I doubt any of us would fault a novelist for focusing on writing books rather than on saving children...
But I get what you are saying.
> You need to be careful with those arguments because you can fall into the trap of "think of the children" for everything
In a text message in 2021, Mark Zuckerberg said that he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.” Zuckerberg also shot down or ignored requests by Clegg to better fund child safety work.
Fair point, but the fuller context is absurd—the OP's rendering is correct in tone and emphasis.
OK, the context makes it clearer, right. Thanks.
"It is difficult to get a man to understand something, when his salary depends upon his not understanding it." - Upton Sinclair
HN has seen this quote many times; tech workers willfully or naively ignore the harm their contributions cause as long as the life changing paychecks keep coming, letting them pretend that they are too far removed from the damage to be responsible.
Then comes the classic post “I’m leaving FAANG, so brave of me <quiet-part>funded entirely by the same extraction and harm I once insisted I didn’t see.</quiet-part>"
I quit Facebook in the early to mid 2010s, well before social media became the ridiculously dystopian world it is today.
Completely coincidentally, I had quit smoking a few weeks before.
The feelings of loss, difficulty in sleeping, feeling that something was missing, and strong desire to get back to smoking/FB was almost exactly the same.
And once I got over the hump, the feelings of calm, relaxation, clarity of thought, etc were also similar.
It was then that I learnt, well before anyone really started talking about social media being harmful, that social media (or at least FB… I didn't really get into any other social media until much later) was literally addictive and probably harmful.
https://www.pnas.org/doi/10.1073/pnas.1320040111
In 2014, Facebook published a paper showing how they can manipulate users’ emotions with their news feed algorithm.
Facebook ran this test on 700k users without consent.
I deactivated my account the day I read that paper and never looked back.
I never really liked FB or any other big application that much, so kicking them after 2016 was not that bad, but I used to be a heavy user of forums, and kicking some of them felt pretty similar to kicking tobacco back in the day.
We are super social, insane monkey creatures that get high on social interaction, which in many ways is a good thing, but it can turn into toxic relationships towards family members or even towards a social media application. It is not very dissimilar to how slot machines or casinos lure you into addiction. They use exactly the same means, therefore they should be regulated like gambling.
I quit Twitter/X about a month ago. Had the exact same feeling.
That's interesting. When I quit Facebook after years of heavy use, I felt no better or worse.
The News Feed killed the positive social interaction on the site, so it had essentially become a (very bad) news aggregator for me.
I wouldn’t say I felt better.
Which is why I found it so comparable to quitting smoking.
A smoker doesn’t feel “better” after quitting smoking. Even over a decade after having quit I bet if I smoked a cigarette right now I would feel much nicer than I did right before I smoked it. However, I would notice physiological changes, like a faster heart rate, slight increase in jumpiness, getting upset sooner, etc.
Quitting FB was similar. I didn’t feel “better”, but several psycho-physiological aspects of my body just went down a notch.
So does this apply to all social media platforms? (Threads, X, Bluesky, IG, etc.) How come they didn't have this evidence from their users as well? Or maybe they didn't bother to ask...
I suppose the harm from social networks is not as pronounced, since you generally interact only with people and content you opted to follow (e.g. on Mastodon).
The harm is from designing them to be addictive. Anything intentionally designed to be addictive is harmful. You’re basically hacking people’s brains by exploiting failure modes of the dopamine system.
If I remember correctly, other research has shown that it's not just the addictive piece. The social comparison piece is a big cause, especially for teenagers. This means Instagram, for example, which is highly visual and includes friends and friends-of-friends, would have a worse effect than, say, Reddit.
What about it being addictive by its nature? I find myself spending too much time on HN and there’s no algorithm driving content to me specifically.
I think there's a difference between something just being a bit addictive and scientifically optimizing something to be addictive. Differences in magnitude do matter because there are thresholds in almost everything beyond which a thing becomes harmful.
Coca leaves can be chewed as a stimulant and it’s relatively harmless, though a bit addictive. Extract cocaine and snort it and it’s a lot more addictive. Turn it into freebase crack and it hits even harder and is even more addictive.
If this is coca leaves, Twitter is cocaine and TikTok is crack.
I had a similar thought. I wonder whether any social media on a similar scale to FB/IG would have the same problems, and whether it's just intrinsic to social media (which is really just a reflection of society, where all these harms also exist).
I think group chats (per-interest gathering places) without incentives for engagement are the most natural and the least likely to cause harm from exposure alone.
Social media has become a tool box for powerful people. They use it for public manipulation. Why would any powerful entity do anything against that?
Big oil, big tobacco, big social: there seems to be a clear pattern of burying evidence of the negative impacts of their products to satisfy some personal greed. These people are mentally ill and we need to help them.
> In a 2020 research project code-named “Project Mercury,” Meta (META.O) scientists worked with survey firm Nielsen to gauge the effect of “deactivating” Facebook and Instagram, according to Meta documents obtained via discovery.
Did they pick people at random and ask those people to stop for a while, or is this about people who choose to stop for their own reasons?
> To the company’s disappointment, “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,” internal documents said.
I don't think it's even a stretch at this point to compare Meta to cigarette companies.
Complete with the very expensive defence lawyers, the payoffs to government, and the waxing poetic about how the foundations of American democracy mean they must have the freedom to make toxic, addictive products and market them to children, whilst simultaneously claiming that of course they would never do that.
Journalists love that study but tend to ignore the likely causal reason for the improved outcomes, which is that users who were paid to stop using Facebook had much lower consumption of news, and especially political news.
Teens don't care about politics for the most part and have absolutely horrible outcomes from social media
That's a pretty good reason to leave FB though.
What does political news have to do with loneliness and social comparison?
Cigarettes aren't the only source of smoke
At minimum, stricter and revised gambling laws should certainly apply to attention consumption wherever recommendation algorithms are used.
Meanwhile I'm sitting here deliberating for the 200th time whether to delete my WhatsApp account, which would mean I no longer take part in group chats with my friends ... in the end I won't delete it, and next up is deliberating for the 201st time whether to delete my WhatsApp account ...
I deleted it a couple of years ago, leaving all group chats. Haven't looked back since.
Everyone that is important to me (and not a slave, nor enslaver of their friends) is on Signal anyways
One of the worst outcomes of the last 20 years is how Big Tech companies have successfully propagandized us into believing they're neutral arbiters of information, blaming any issues on "The Algorithm" [tm].
Section 230 is meant to be a safe harbor for a platform not to be considered a publisher but where is the line between hosting content and choosing what third-party content people see? I would argue that if you have sufficient content, you could de facto publish any content you want by choosing what people see.
"The Algorithm" is not some magical black box. Everything it does is because some human tinkered with it to produce a certain result. The thumb is constantly being put on the scale to promote or downrank certain content. As we're seeing in recent years, this is done to cozy up to certain administrations.
The First Amendment really is a double-edged sword here because I think these companies absolutely encourage unhealthy behavior and destructive content to a wide range of people, including minors.
I can't help but consider the contrast with China, which heavily regulates this sort of thing. Yes, China also suppresses any politically sensitive content but, I hate to break it to you, so does every US social media company.
Your solution to the government putting pressure on social media companies to censor is to give the government more power over them by removing section 230?
I'm saying social media companies are using Section 230 as a shield with the illusion of "neutrality" when they're anything but. And if they're taking a very non-neutral stance on content, which they are, they should be treated as a publisher not a platform.
Of course they did. Anyone not blind to what is going on knows this, of course. It is merely a matter of proving it in front of the law. That's all this is about. It's no longer a question of whether or not they acted despicably.
I doubt serious consequences will follow this time, as there haven't been serious consequences all the previous times Meta/Facebook has been found guilty of wrongdoing. However, it can serve as one more event to point out to naive people who don't want to believe the IT person that FB/Meta is evil, because they don't want to give up some interaction on it, or some comfort they get from FB/Meta's apps or tools. I think it's a natural tendency most of us have: we use something, then we demand extra good proof when someone claims that thing is bad, because we don't want to change and stop using it. Plus FB/Meta will do anything they can to keep people addicted to their platforms.
Someone should include the owl really meme into this process
"Social media harm" sounds like one of these nebulous things which has no real definition
"Social media was a mistake, just like the internet" oh ok so we should just give up our gmails and reddits and everything because people insist on the widest possible swathe of categories
But actually, when it comes to Metabook... I don't think Zuckerberg cares about anybody, and more to the point, they refuse to give you a chronological feed, just for starters.
People have died and their friends haven't known about it because the algorithm never showed them. People have noticed messages they've got from people trying to get in touch with them years later, because Zuck feels you should be using Facebook all the time, not email https://news.ycombinator.com/item?id=4151433
When your company is run by a megalomaniac this is what you get...
I already knew Zuck was a piece of shit before reading Careless People, but holy shit.
> In a text message in 2021, Mark Zuckerberg said that he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.”
> Zuckerberg also shot down or ignored requests by Clegg to better fund child safety work.
"It can make quite a difference not just to you but to humanity: the sort of boss you choose, whose dreams you help come true." -Vonnegut
Meta delenda est.
Ads delenda est
Relevant. CEOs lie. Boldly.
https://www.youtube.com/watch?v=e_ZDQKq2F08
I remember reading case studies on the Tylenol recall.
Meta leadership has had opportunity after opportunity to do the hard thing, and be the force for good in a manner that they can live with.
Even more frustrating - the dual-class share structure gives Zuckerberg control.
They should have been shut down and all the C-level execs arrested after Cambridge Analytica. The weapons-grade psyops they used to get Trump elected are crimes against humanity.
Meta is Zuck. Zuck is bad. Accept it everyone. Why people hate Elon Musk but not Zuck is beyond me. Zuck has done real harm as well, some of it worse than Musk.
Who is surprised? Fuck Zuckerberg.
Unironically, Zuckerberg and the rest of the top brass of Meta should be in The Hague.
The usual reminders apply: you can allege pretty much anything in such a brief, and "court filing" does not endow the argument with authority. And, the press corps is constrained for space, so their summary of a 230-page brief is necessarily lacking.
The converse story about the defendants' briefs would have the headline "Plaintiffs full of shit, US court filing alleges" but you wouldn't take Meta's defense at face value either, I assume.
https://www.lieffcabraser.com/pdf/2025-11-21-Brief-dckt-2480...
This is a weird comment to make, given that they're citing "Meta documents obtained via discovery."
Doesn't seem like you're making this comment in good faith, and/or you're very invested in Meta somehow.
Every time they contact me I tell Meta recruiters that I wouldn't stoop to work for a B-list chucklehead like Zuck, and that has been my policy for over 15 years, so no.
You're not speaking to a jury. Regular people just living their lives only have to use their best judgment and life experience to decide which side they think is right. We don't need to be coerced into neutrality just because neither side has presented hard proof.
Sad thing is that nothing will come of this. Meta will go scot-free.
These discussions never discuss the priors: is this harm on a different scale than what preceded it? Is social media worse than MTV or teen magazines?
It is a completely different scale.
I loved MTV as a kid but it was as different to social media as can be.
Half the time you would turn it on and not like the video playing then switch the channel. Even if you liked the video that was playing, half the time the next video was something you didn't like so you would switch the channel.
Now imagine if MTV had used machine learning to predict the next video to show me personally that would best cause me to not change the channel.
That is not even really a different scale but a different category.
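To make the contrast concrete, here's a minimal, hypothetical sketch (made-up functions and data, not anyone's actual system) of an engagement-optimized picker: instead of airing a fixed playlist, it chooses whichever candidate it predicts you're least likely to switch away from.

    import random

    def predicted_keep_watching(user_history, video):
        # Stand-in for a learned model: score by overlap with tags the user liked.
        liked_tags = {tag for v in user_history for tag in v["tags"] if v["liked"]}
        return len(liked_tags & set(video["tags"])) / (len(video["tags"]) or 1)

    def pick_next_video(user_history, candidates, explore_rate=0.1):
        # Mostly exploit the best prediction; occasionally explore to keep learning.
        if random.random() < explore_rate:
            return random.choice(candidates)
        return max(candidates, key=lambda v: predicted_keep_watching(user_history, v))

    history = [{"tags": ["metal", "80s"], "liked": True},
               {"tags": ["pop"], "liked": False}]
    candidates = [{"id": 1, "tags": ["metal"]}, {"id": 2, "tags": ["pop", "dance"]}]
    print(pick_next_video(history, candidates))  # almost always the metal video

MTV could never close that loop per viewer; a feed that does, and updates on every session, is a different animal.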
Why does it matter? We can't go back and retroactively punish MTV for its behavior decades ago. Not to mention we likely have a much better understanding of the impact of media on mental health now than we did then.
The best time to start doing the right thing is now. Unless the argument here is “since people got away with it before it’s not fair to punish people now.”
What policy proposals would you have made with respect to MTV decades ago, and how would people at the time have reacted to them? MTV peaked (I think) before I was alive or at least old enough to have formative memories involving it, but people have been complaining about television being brain-rotting for many decades and I'm sure there was political pressure against MTV's programming on some grounds or another, by stodgy cultural conservatives who hated freedom of expression or challenges to their dogma. Were they correct? Would it have been good for the US federal government in the 80s and 90s to have actually imposed meaningful legal censorship on MTV for the benefit of the mental health of its youth audience?
I think passively watching something on television is very different from today's highly interactive social media. Instagram is literally a small percentage of people becoming superstars for their looks and lifestyles, and kids are expected to play along.
> people have been complaining about television being brain-rotting for many decades
This was a broad, simplified, unsupported claim that cannot be compared to the demonstrable, well-studied impacts of social media on people’s - especially young people’s - minds. They are not even remotely on the same level.
If we want to debate MTV specifically, yes, there are well-studied, proven impacts of how various media can shape how people think about their own bodies and lives, which can be harmful. But again, it's not remotely to the same degree. Social media can be uniquely poisonous. There are myriad studies out there that confirm this, and I'm happy to link some if you want me to.
If somebody wanted to, it would probably not be very difficult to write an article all but conclusively proving that Instagram is more harmful than MTV.
It matters because it points towards a common failure mode which we've seen repeatedly in the past. In the 1990s, people routinely published news articles like the OP (e.g. https://www.nytimes.com/1999/04/26/business/technology-digit...) about how researchers "knew" that violent video games were causing harm and the dastardly companies producing them ignored the evidence. In the 1980s, those same articles (https://www.nytimes.com/1983/07/31/arts/tv-view-the-networks...) were published about television: why won't the networks acknowledge the plain, obvious fact that showing violence on TV makes violence more acceptable in real life?
Is the evidence better this time, and the argument for corporate misconduct more ironclad? Maybe, I guess, but I'm skeptical.
My response I gave to the other person basically covers how I’d respond to this:
https://news.ycombinator.com/item?id=46023313
Plus if we don’t do anything about it now, rohan_2 twenty years from now will use the same argument about whatever comes next!
I don’t understand why things like social media are meant to be regulated by the government.
Isn’t religion where we culturally put “not doing things that are bad for you”? And everyone is allowed to have a different version of that?
Maybe instead of regulating social media, we should be looking at where the teeth of religion went, even in our separation-of-church-and-state society. If everyone thinks their kids shouldn't do something, enforcing that is exactly the kind of thing religion is practically useful for.
Well, the more scientific and pluralistic our society becomes, the more religion is necessarily sapped of its ability to compel behavior. If you lived in 13th-century France, the Catholic Church was a total cultural force and thus could regulate behavior, but the very act of writing freedom of religion into law communicates a certain idea about religion: it's so unimportant that you can have whatever form of religion you want.
In any case, one ought to distinguish between "You shouldn't do things which are bad for you," and "You shouldn't do things you know are bad for others." Especially, "Giant corporations with ambiguous structures of responsibility shouldn't be allowed to do things which are bad for others."
> If everyone thinks their kids shouldn’t do something, enforcing that sounds like exactly what purpose religion is practically useful for.
Alternatively, being raised well by their parents and the community around them.
Religion is not a needed component of that.
Are you serious? People don't need religion to be moral. If what I see from religion these days is any indicator, I am extremely happy we kept our kids far, far away from it. From all of it. I will concede that not all religion is bad, but quite a lot of it is grift at best and cleverly disguised totalitarianism at worst. Many religious figures have absolutely no problem talking publicly about their "deity-given" right to dominate and control the lives of others for their own personal gain. I don't see how that fits inside any accepted definition of morality.
The Spanish Inquisition has entered the chat.
There are certain statements that should make you wary of study findings.
"People who X reported Y" is one of those phrases.
“people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,”
This is the same argument you see in cosmetics advertising: "Women who used this serum reported a reduction in wrinkles."
If the study has evidence that doing X actually causes Y, it would be irresponsible not to say that directly. Dropping to "people reported" seems like an admission that there was no measurable effect other than the influence of the researchers on the opinions of the subjects.
Mental state can be difficult in this respect because it is much harder to objectively measure internal states. The fact that it is harder to do doesn't grant validity to subjective answers, though.
I was once part of a study that did this. It was fascinating seeing something that appeared to have no effect being written up using both "people reported" and "significant" (meaning, not likely by chance, but implying a large effect to the casual reader).
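For anyone who hasn't seen this effect first-hand, a toy simulation (hypothetical data, not from the Meta study or the one I was in) shows how a practically tiny shift in self-reported scores still comes out "statistically significant" once the sample is large:

    import math, random, statistics

    random.seed(0)
    n = 50_000
    # Self-reported anxiety on a 1-10 scale; the "treatment" lowers the true mean by 0.1.
    control = [random.gauss(5.0, 2.0) for _ in range(n)]
    treated = [random.gauss(4.9, 2.0) for _ in range(n)]

    diff = statistics.mean(control) - statistics.mean(treated)
    se = math.sqrt(statistics.variance(control) / n + statistics.variance(treated) / n)
    print(f"mean difference: {diff:.2f}")    # roughly 0.1 points on a 10-point scale
    print(f"z-statistic:     {diff / se:.1f}")  # far beyond 1.96, so "significant"

A p-value only says the difference probably isn't chance; it says nothing about whether a tenth of a point on a self-report scale actually matters.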
Dude, who cares about study design and methodological validity! Let's just burn Meta down and put Zuck to jail! /s
What you are saying is valid criticism of the study, but people here have already made up their minds, so they downvote.
Another point to add is that one week is way too short - assuming there is an effect, it might disappear or even reverse after a month.
To all downvoters: if you think of yourselves as smart, rational people, please just use search/AI to see for yourselves whether there is high-quality evidence of a _causal_ impact of social media on kids' mental health. The results are mixed at best.
Interestingly, the post is climbing its way back to zero.
I find the downvote without counterargument to be an odd response to a good faith post. It seems like it would strengthen the argument if the message they send is "I don't have a counter to this but I don't like it and I don't like that others will see this point of view"
I have come to realise that I have a much higher threshold when it comes to upvoting, downvoting, or rating things. It seems like a lot of people freely upvote, like, heart, or downvote without a care. We live in a world where a 4.8-star rating (composed entirely of zero- and five-star ratings) is considered a concern. So I try not to be bothered by it, but I'm pretty sure that subconsciously a downvote hurts more than someone saying "I disagree".