
New Report Finds That the Majority of Parents Don’t Update Privacy Controls for Kids in Social Apps


Here’s a universal truth worth noting when assessing social media: while social platforms provide various tools to help users curate, manage, and customize their in-app experience, most people never use them, no matter how helpful or beneficial they may be.

We’ve seen this repeatedly, with ad controls, feed preferences, and privacy tools. Social platforms now provide a breadth of options to manage your experience, but for the most part, people don’t use them.

That’s especially important when it comes to younger users, and protecting them from unwanted exposure in social apps, with a Washington Post report today confirming that only a fraction of parents ever use the available controls to manage their kids’ social app usage.

As per the report:

“By the end of 2022, less than 10 percent of teens on Meta’s Instagram had enabled the parental supervision setting, according to people familiar with the matter who spoke on the condition of anonymity to discuss private company matters. Of those who did, only a single-digit percentage of parents had adjusted their kids’ settings.”

You would think that this would be a bigger focus, but clearly, it’s too time-consuming, or too technically difficult, for most parents to bother with. As noted, the same is true of the other control options and tools available in-stream.

For example, various past research reports have shown that the vast majority of social media users never update their privacy settings, while a 2019 survey found that 74% of Facebook users, even then, did not know that the app kept a record of their traits and interests.

Even major news stories around privacy and security don’t have the impact that you’d expect.

In 2018, following the Cambridge Analytica scandal, which revealed how people’s personal Facebook data had been used in a (seemingly) complex program designed to sway voting preferences, based on inherent behaviors and traits, only 54% of Facebook users made any change at all to their privacy settings in the app.

Despite the tools being readily available, most people just go with the flow. That’s a concern for parents, and for teen users who could be putting themselves at risk, but it’s also worth noting with regard to broader usage behaviors, and how people engage with different elements in social apps.

Most people use the new “For You” feeds, which now feature more and more recommended posts, as opposed to only updates from the profiles they follow. Most people don’t update their privacy settings to protect their personal info. Most people don’t bother to opt out of certain ad categories.

The exception to this would likely be Apple’s iOS 14 update, which directly prompted all iOS users to choose whether to allow individual apps to track their activity.


The automated, upfront alert, and the wording of the two options on screen, saw many users cut off data tracking for many apps, which has had a major impact on the broader digital ads market.

But it is worth noting that with every update the platforms make, and every new tool that enables people to opt out of something, change their display, or update their settings, potentially cutting them off from certain elements, the platforms roll out these changes knowing that they’re very unlikely to cause any major usage impact.

Because most people simply won’t bother. So next time you see that Meta has implemented some new privacy setting, or some new tool that enables people to switch off data tracking, know that a lot of this is PR, designed to appease regulators. The platforms know that hardly anyone is actually going to use these options.

That’s worth noting this week, as representatives from Meta, Snapchat, TikTok, and X front a Senate Judiciary Committee hearing exploring the rising risk of child sexual exploitation online.

Of course, the platforms can’t force individual users to take action on these elements. But it’s interesting to note, as with the Apple example, that there are more effective ways to prompt users to take direct action, so they could push for more responsiveness on this front if they chose.



