"Fear and Liking On Facebook": An Essay on Social Media
Beware: that cute kitten in the video is out to get you!
When you 'like' a cute cat video, fill in a quiz or complete a personality test on social media, you are sending an unseen world of advertisers, analysts and data miners personal information about yourself that you would most likely never give out willingly. Recent scandals, such as the one involving the data company Cambridge Analytica and Facebook, have brought this to the fore and shed some light on the murky world of personal data online. Much was made of the type of data Cambridge Analytica had access to and how they then used it to potentially influence voters' decisions in the US Presidential election of 8th November 2016; however, they are only one of thousands of similar companies that collect, analyse and redistribute data in this way. At the heart of the matter is the coupling of your smartphone, an always-on supercomputer, with powerful algorithms buried within the apps and software it runs. This combination of powerful hardware and sophisticated software means that data companies now probably know more about you than you know about yourself.
Whether you like it or not, your smartphone is collecting and recording data about you and transmitting it to your network provider and a whole host of other third parties. This transmission happens even if you switch off all background refreshing of your apps, elect not to automatically download updates or emails, and prevent auto-connecting to 'known' Wi-Fi networks. Your network service provider is collecting information about you and your life on an almost second-by-second basis, and this is not as anonymous as you might think. Obviously your service provider knows who you are (you pay them money), but what about the other companies they pass some of your anonymised data to? Network providers are required by law to strip out certain so-called personally identifiable information, but this requirement is actually very limited and most of the data they collect is up for sale.
In March 2013, researchers from the Massachusetts Institute of Technology (MIT) studied fifteen months of 'anonymised' data, donated by an undisclosed European network provider, relating to 1.5 million of its customers. The researchers were given access to the customers' location streams, restricted to updates on an hourly basis, and were never told at any stage who those customers were. The MIT team found they needed only four approximate places and times to uniquely identify 95% of those 1.5 million customers! Your mobile trace, then, is very personal and very easily identifiable. The potential of your smartphone as a surveillance tool has grown beyond even the imaginings of George Orwell in his book 1984. In coining the phrase "Big Brother is watching you", Orwell pictured a society with an overt camera in every room, through which the state snooped on the individual. Even he could not have envisaged a world eclipsing his own creation: one where people willingly carry a camera and data-transmitting device twenty-four hours a day, seven days a week.
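To get a feel for how just four coarse points can single someone out, here is a minimal sketch of the matching logic in Python. The data is invented for illustration; the actual study worked on 1.5 million real traces.

```python
# Toy dataset: user -> set of (coarse_location, hour) observations.
# In the MIT study each 'point' was an approximate place plus a time window.
traces = {
    "user_a": {("cell_12", 8), ("cell_12", 9), ("cell_40", 13), ("cell_7", 19)},
    "user_b": {("cell_12", 8), ("cell_33", 9), ("cell_40", 13), ("cell_7", 22)},
    "user_c": {("cell_5", 8),  ("cell_5", 12), ("cell_40", 13), ("cell_9", 19)},
}

def matching_users(observed):
    """Return every user whose trace contains ALL the observed points."""
    return [user for user, trace in traces.items() if observed <= trace]

# An observer who saw the target at just four approximate place/time points:
seen = {("cell_12", 8), ("cell_12", 9), ("cell_40", 13), ("cell_7", 19)}
print(matching_users(seen))  # ['user_a'] -> the 'anonymous' trace is unique
```

With realistic mobility data, very few people share even four place-and-time points, which is why so small a set of observations pins down so many traces.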
Fear and 'Liking' on Facebook
So, with the uncovering of the Cambridge Analytica scandal, social networks have been exposed. We can no longer pretend they are socially neutral platforms: we are in fact the commodity, and our data can be used by third or fourth parties to further their own agendas. So how did this happen? How does a seemingly inert place for posting pictures and messages to your friends end up a data gold mine for companies to exploit for commercial and political purposes?
Facebook is what is known as a 'data supernode', with two billion active users every month sharing 60,000 GB of data through its servers every second; that's 5,184,000,000 GB of data every day! When you take WhatsApp and Instagram into account, around 80% of all social media traffic on the web is directed through Facebook's servers. This gives Facebook a huge resource, and when you 'accept' its terms and conditions it then OWNS all the data you produce whilst using the platform. That picture of you at a bar on holiday drinking vodka jellies? That's owned by them as soon as you upload it or post it onto your wall. It may not seem like a big deal, but with every post, upload, like, dislike and check-in you are building an exact construct of your personality. The insidious nature of the terms and conditions means that, because Facebook owns it, they can sell it to anyone they want, whenever they want.
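The daily figure follows directly from the per-second one; a quick back-of-the-envelope check of the article's own numbers:

```python
gb_per_second = 60_000           # per-second figure quoted above
seconds_per_day = 60 * 60 * 24   # 86,400 seconds in a day
print(f"{gb_per_second * seconds_per_day:,} GB/day")  # 5,184,000,000 GB/day
```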
Facebook, like most companies, offers an all-or-nothing approach to data sharing. To access their services you must agree to hand over your data, and this is usually outlined in the huge terms and conditions file which accompanies the 'agree' button. These terms and conditions can (in the case of Facebook) run to over 13,000 words, around the same length as Shakespeare's play The Taming of the Shrew! They are also written at roughly degree-level readability, even though the minimum age for signing up to these services is 13. An experiment carried out by the Institute for Privacy found that reading the terms and conditions of the main social media companies' apps one after the other would take nine hours of continuous reading.
As well as taking this data, Facebook and others limit the amount of control we have over how it is collected and used. The model is self-preserving and encourages web monopolies by exacerbating the effects of so-called 'vendor lock-in', where we are in effect tied into a service. To illustrate: can you think of a viable mainstream alternative to Facebook?
So, are we inadvertently sleep-walking into a world where people we don't know actually know more about us than we do ourselves? A recently adopted term is that we now live in a "filter bubble": by allowing companies like Facebook, Google, Apple and Microsoft to filter near-infinite content (thereby reducing our choices and focussing our attention), we have turned a blind eye to the amount of personal data we are happy for these companies to own and use. After all, their services are free! The rise of free services and apps has enabled companies to disguise the intended use of your data. We are in fact selling our data and information in order to pay for these services, but most users are not fully aware of what this means. As the old saying goes: "If the service is free, then you are the product!" If you have downloaded 'free' apps such as Flashlight or Mirror to your device, which are useful if you want to check your appearance when out or need to find your way home from the pub in the dark (OK, so I live in the countryside), then what do the developers and suppliers get in return? The answer is location data.
As with the research mentioned earlier, location data is very valuable. It can be used to generate an effective map of the places you visit and shop, and this map can then be sold to retailers (a toy version of this profiling is sketched below). When it is linked to other data, companies can build sophisticated demographic models and even tailor advertising to you personally. This data is not only harvested when you use the app: collection can run in the background, so even if you only use the app once in a while, information is gathered and sent. Developers rely on most people forgetting, or not knowing how, to restrict these apps in their security settings, and they therefore receive vast amounts of data that they can sell on.

Everyone on social media is now being individualised with targeted, continuously adjusted stimuli, and this is due mostly to the advent of sophisticated algorithms. The algorithm is the single most important tool for social media companies and for the producers of the apps we use on our smartphones and tablets every day. An algorithm can be used to filter information, collect data or direct a user to specific content, and at times it does all of these at once. You won't see an algorithm, and it won't ask whether you mind it telling another company about you. Perhaps most worryingly, it won't ask your permission when it connects to other algorithms, embedded in other apps you may have, which also harvest information about you.
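Here is the promised toy illustration of place-profiling: a minimal Python sketch with invented coordinates. Real data brokers work at far greater scale and precision, but the principle of turning raw pings into a ranked list of your haunts is the same.

```python
from collections import Counter

# Invented background location pings (latitude, longitude).
pings = [
    (51.5072, -0.1276), (51.5074, -0.1277), (51.5073, -0.1276),  # 'home'
    (51.5153, -0.0921), (51.5154, -0.0922),                      # 'work'
    (51.5033, -0.1194),                                          # 'pub'
]

def place_profile(pings, precision=3):
    """Snap pings to a coarse grid and count visits per grid cell."""
    return Counter((round(lat, precision), round(lng, precision))
                   for lat, lng in pings)

for cell, visits in place_profile(pings).most_common():
    print(cell, visits)  # most-visited cells first: home, then work, then pub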
Algorithms are now being developed with artificial intelligence systems. They are becoming increasingly powerful, and potentially much more intrusive, requiring ever greater amounts of data to crunch in order to analyse an ever-broadening range of behaviours. The 'internet of things' is a good illustration of this: previously independent technology, such as fridges, heating systems, doorbells and even washing machines, is being connected through apps and the internet. Through these connections, often facilitated by smartphones, algorithmic programmes are able to build up a picture of even more complex behavioural patterns. Our voracious appetite for spending time online, directly or indirectly, supplies ever-increasing amounts of raw data to these systems and makes them more effective, and arguably much more intrusive.
Algorithms are at work whenever you 'like', comment on or buy products and services through social media. It is these algorithms which prioritise showing us the information and products they think we want, based on our activities. This is why, when you like a friend's post featuring oversized underwear, you suddenly get bombarded with adverts for the latest in shapewear! In the beginning the main purpose of these algorithms was exactly this: to target us with advertising. It is now understood that their growing sophistication means this is no longer just direct marketing but subtle, continuous behavioural modification. In mainstream advertising the subliminal advert has been banned for decades, but isn't the constant suggestion and reinforcement being employed on social media exactly that? It achieves the same objectives as subliminal advertising, yet we don't recognise it as such.
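A minimal sketch of this kind of engagement-driven feed ranking might look like the following. The weights and decay are invented for illustration; real platforms combine thousands of signals.

```python
import math

# Invented posts; real ranking systems use far richer features.
posts = [
    {"id": 1, "likes": 12,  "comments": 2,  "age_hours": 1},
    {"id": 2, "likes": 900, "comments": 40, "age_hours": 30},
    {"id": 3, "likes": 45,  "comments": 30, "age_hours": 3},
]

def engagement_score(post):
    interactions = post["likes"] + 3 * post["comments"]  # comments weigh more
    freshness = math.exp(-post["age_hours"] / 24)        # newer ranks higher
    return interactions * freshness

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # [2, 3, 1]: the 'hottest' posts shown first
```

Note that nothing in the score asks whether a post is true, kind or useful; it only asks how much interaction it provokes.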
Social media algorithms in particular are designed so that their single most important metric is engagement. This can be engagement with others, with the system as a whole, or with companies and organisations the system thinks you may be interested in (based on your behaviours). As the use of algorithms on social media has increased, they have been linked to a rise in negativity, such as misogyny, nationalism and radicalisation, in online communities, which has inevitably percolated into society as a whole. Because algorithms work within computerised systems, they generate engagement with very rapid response times, and since everyone within the system is using algorithms designed to create their desired effect as quickly as possible, you get viral trends.

These trends can of course be benign or have altruistic motives, but the algorithms do not differentiate between these and the negative elements. Looking at political movements such as the 'Arab Spring' or campaigns such as #metoo, it is obvious that a great deal of good can come out of social media; however, there is a sting in the tail. When movements or campaigns go 'viral', the pace and volume of posting, 'liking' and sharing inevitably creates an echo-chamber effect in which spirals of increasing polarisation build momentum, fed and fuelled by the algorithms running unseen in the background. This can lead to a wave of one-upmanship ("it happened better, or worse, for me") and to moderate ideas being swamped by extreme ones, rapidly creating negativity from a once-positive idea.

The empowerment that such negative trends give, coupled with the superficial anonymity that social media often allows, means more extreme views and opinions are driven into the mainstream. This makes others uncomfortable or even angry, eliciting an emotional response which in turn generates more traffic, likes, retweets and engagement, and therefore more data. When this reactionary human interaction combines with the system's rapid algorithmic reflexes, people are directed to wherever the most traffic (data) is. The negativity is propagated and becomes a whirlpool of self-sustaining antipathy. Since negativity works better on these platforms (due to the algorithmic processes), the same tools that were used to promote good work even more effectively for those who come out in opposition. Algorithmic empowerment of humanity is indeed a powerful tool, but it can also be an outright dangerous one. Nastiness, outrage and extreme views are in general the most effective ways to increase engagement, and it is because of this that ISIS gets more 'bang for its buck' than the Arab Spring movement.
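The self-reinforcing spiral described above can be captured in a few lines: a toy simulation (all numbers invented) in which exposure follows engagement and engagement follows exposure, so a small initial edge for outrage compounds round after round.

```python
# Two competing posts; the 'outrage' post starts with only a small edge.
posts = {"measured take": 1.00, "outrage take": 1.20}

for round_number in range(10):
    total = sum(posts.values())
    for post, engagement in list(posts.items()):
        exposure = engagement / total         # the algorithm shows what's 'hot'
        posts[post] += exposure * engagement  # exposure begets engagement

print(posts)  # the small initial edge has compounded into dominance
```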
This mechanism for propagating negativity has had an unexpected side effect. Because the algorithms do not differentiate between positive and negative streams, and negative streams are statistically the ones generating more traffic, mainstream advertising gets channelled into these feeds without the direct intention of the companies producing it. This can have the effect of lending credence and credibility to the negative feed, and it has recently been the focus of a number of well-known companies who pulled their advertising from Facebook and Google over the inadvertent placement of their adverts.
Fake News
Another consequence of social media algorithms prioritising engagement over any concern for accuracy is the rise of so-called fake news. As discussed previously, Facebook, Twitter and other social media platforms are viral hotbeds because of the algorithmic processes they employ. It is also important to understand that these processes do not differentiate between news stories originating from a reputable agency and those from a conspiracy website. A story which has gone viral can therefore look as if it has been sourced and checked in the same way a traditional news story would have been. On Facebook in 2017, the top twenty fabricated stories netted more engagement than real stories from sources that actually did factual reporting. This statistic gains even more significance when you factor in research by Pew which found that 44% of Americans get news from Facebook! Of course mainstream media is not entirely blameless, as it often misreports or amplifies stories, but social media has a massive, and increasing, influence on what people are 'led' to believe, mainly due to its always-at-hand availability.
So if fake news is such an issue on platforms such as Facebook, why do they not do something about it? Facebook responded to a recent Channel 4 Dispatches documentary on fake news and hate speech on its platform by stating that it was aiming to employ over 7,500 content reviewers, meaning suspect articles and stories will be checked by a human team who reference external agencies such as PolitiFact. This is a step forward; however, when one considers Facebook's 1.4 billion daily active users, posting in over 100 languages, often multiple times a day, it is hard to see how manpower alone can moderate the problem. Cynically, it could be said that Facebook doesn't want to eradicate fake news, as it is one of the platform's best sources of engagement.
The only way to tackle the issue effectively is to redeploy the algorithms and change their operating parameters. Software developed by Kate Starbird, a professor of human centered design and engineering at the University of Washington, was able to distinguish with 88% accuracy whether a tweet was spreading rumour, based on its configuration and its 'heat signature'. She has also developed algorithms which can analyse trends on Facebook. If these types of algorithm were combined with the removal of likes, retweets and upvotes (on Reddit), then social media would stop being a popularity contest. Posts could be assessed on what they say and their content, rather than on the fact that they have 500,000 re-posts.
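Starbird's actual models are far more sophisticated than anything that fits here, but as a hedged sketch of the general approach, a simple classifier trained on invented propagation features (using scikit-learn) might look like this:

```python
from sklearn.linear_model import LogisticRegression

# Invented training data. Features per story:
# [retweets_in_first_hour, share_of_brand_new_accounts, links_reputable_source]
X = [
    [500, 0.80, 0], [800, 0.90, 0], [350, 0.75, 0],  # rumour-like propagation
    [120, 0.10, 1], [60,  0.05, 1], [200, 0.20, 1],  # ordinary propagation
]
y = [1, 1, 1, 0, 0, 0]  # 1 = rumour, 0 = not

model = LogisticRegression().fit(X, y)
print(model.predict([[600, 0.85, 0]]))  # [1] -> flagged as rumour-like
```

The key idea is that such a classifier judges a story by *how* it spreads (its 'heat signature'), not by how popular it is.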
Many analysts have highlighted the huge impact fake news can have on society, and no event illustrates this better than the proliferation of fake news, and the orchestrated tweeting, posting and sharing of those stories, during the US Presidential election. It has been demonstrated that fake anonymous accounts generated huge amounts of engagement targeted at potential voters in marginal swing states. This had a disproportionate effect, as only around 80,000 voters in those states decided the entire election result. Much weight has been placed on the influence this fake news had on voting patterns, but the issue is not just about undermining democracies: earlier this year the UN blamed Facebook directly for spreading, and allowing to be spread, hate speech and news which contributed to the genocide of Rohingya Muslims.
So pressure on social media companies is growing, and they are now recognising that their time as disruptors may be coming to an end. Mark Zuckerberg (Facebook founder and controlling shareholder) said at the US congressional hearings held in April 2018: "We have a responsibility to not just build tools, but make sure these tools are used for good". If implemented, this ethos will be a major change for a company that has always viewed itself as merely providing a platform for others to post onto, and as free to use that content however it pleases. This change in direction is important, but even more important is for us to recognise the pitfalls and limitations of using social media and to be aware of what we are actually signing up to. There is no magic technological fix for all the issues arising from our interconnected world, but we must become conscious of how the system works when interacting with it.
Note: This article was originally submitted to a technical magazine for publication, but never made it. It was sent by post to @barge, who OCR'ed and then uploaded it. This introductory post by @themightysquid may help to clarify (a little) why this is necessarily so. Comments are most welcome, but they cannot be answered by TMS immediately, as any correspondence will take place by snail-mail :). Thanks for your patience and thanks for reading!
Graphics: the FB 'f' was sketched by @themightysquid; other stuff is by @barge.