When companies or businesses use your data, they are using your personal information, and they must follow the rules on data protection. This applies to data kept on staff, customers and account holders: for instance, when staff are hired, when staff documents are processed, and when businesses sell their goods or services. It also covers the use of CCTV.
This could also include keeping customers’ addresses on file, recording personnel working hours, and giving delivery details to a delivery company. This data has to be kept safe, accurate and up to date. What’s more, when a company obtains information, especially personal data, it has to tell the person who the company is and how it will use that data, including whether it is being shared with other parties.
Companies and businesses must tell the person or customer that they are free to see any data the company holds about them and to correct it if it is wrong. They further have the right to have their data erased and/or to request that their information is not used for specific purposes.
Data accumulated on staff must be protected and kept safe, with paper documents secured in filing cabinets and computer records protected by passwords. This data should only be kept for as long as it is needed and then disposed of securely, by shredding, for instance.
Companies must be able to justify monitoring their workers at work because all workers have rights at work and are entitled to be treated justly.
Businesses can’t monitor workers without their knowledge unless it is part of a specific investigation, and even then the monitoring must end when the investigation is over. If a company uses CCTV, it must tell people they are being recorded.
This is normally done by displaying signs, which must be clearly visible and legible. Anyone can request to see images recorded of them, and these must be provided within 40 days, although a fee of up to £10 can be charged.
But then there are the apps on your phone that let people know where you were last night, and dozens of companies use smartphone location data to help advertisers and even hedge funds. They say it’s anonymous, but…
There are millions of dots on the map, tracking highways, side streets and bike trails, each one following the trail of an anonymous cellphone user.
One path tracks someone from a home outside Newark to a nearby Planned Parenthood, remaining there for more than an hour. Another represents somebody who travels with the mayor of New York throughout the day and returns to Long Island in the evening.
Yet another leaves a house in upstate New York at 7 am and travels to a middle school 14 miles away, staying until late afternoon each school day. Only one person makes that journey: a maths teacher, and her smartphone goes with her.
The app on her device gathers her location information, which is then sold without her knowledge, and it has recorded her location as frequently as every two seconds. According to one database, more than a million phones in the New York area collect location data, and even though identities are not exposed in these records, it was easy to connect her to that dot.
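Why is it so easy to connect an "anonymous" dot to a real person? Re-identification studies typically rely on simple heuristics: the place where a phone sits overnight is almost certainly home, and the place it sits on weekdays is almost certainly work or school. A minimal sketch of that idea, with entirely made-up hours and place names, might look like this:

```python
from collections import Counter

def infer_home_and_work(pings):
    # pings: list of (hour_of_day, place) tuples from one "anonymous" device.
    # Guess home as the most common overnight location and work as the most
    # common daytime location. Purely illustrative: the time windows and the
    # data below are assumptions, not any vendor's actual method.
    night = Counter(place for hour, place in pings if hour < 6 or hour >= 21)
    day = Counter(place for hour, place in pings if 9 <= hour < 17)
    home = night.most_common(1)[0][0] if night else None
    work = day.most_common(1)[0][0] if day else None
    return home, work

pings = [(23, "12 Elm St"), (2, "12 Elm St"), (5, "12 Elm St"),
         (10, "Middle School"), (13, "Middle School"), (15, "Middle School"),
         (18, "Trailhead")]
print(infer_home_and_work(pings))  # ('12 Elm St', 'Middle School')
```

Once home and workplace are known, a public records or property search usually narrows the "anonymous" user down to one household, which is exactly how the teacher above could be identified.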
The app tracked her as she went to a Weight Watchers meeting and to her dermatologist’s office for a minor procedure. It followed her hiking with her dog and staying at her ex-boyfriend’s home, which is really disturbing.
Because it’s the thought of somebody obtaining those intimate details that you don’t want people to know. And even though many consumers know that apps like these can track their movements, smartphones have become ubiquitous and the technology ever more accurate, and this is an industry that snoops on people’s everyday habits and is growing ever more intrusive.
And then we have the Facebook-Cambridge Analytica data scandal, a major political disgrace that broke in early 2018 when it was reported that Cambridge Analytica had collected the personal data of millions of people’s Facebook profiles without their permission and used it for political advertising.
It has been described as a watershed moment in the public perception of personal data; it precipitated a sharp decline in Facebook’s stock price and prompted calls for tighter control of tech companies’ use of personal data.
The illegal harvesting of personal data by Cambridge Analytica was first reported in December 2015 by Harry Davies, a journalist for The Guardian. He reported that Cambridge Analytica was working for United States Senator Ted Cruz, using data collected from millions of people’s Facebook accounts without their permission.
Of course, Facebook declined to comment on the story other than to say it was investigating, but the scandal ultimately exploded in March 2018 with the appearance of a whistle-blower, ex-Cambridge Analytica employee Christopher Wylie.
More than $100 billion was knocked off Facebook’s market capitalisation within days, and politicians in the US and the United Kingdom demanded explanations from Facebook CEO Mark Zuckerberg. The scandal ultimately led him to agree to testify before the United States Congress.
The scandal was significant enough to stimulate public debate on ethical standards for social media companies, political consulting organisations and politicians. Consumer advocates called for more comprehensive consumer protection in online media, the right to privacy, and action against misinformation and propaganda.
Aleksandr Kogan, a data scientist at Cambridge University, developed an app called “This Is Your Digital Life”, sometimes stylised as “thisisyourdigitallife”.
He provided the app to Cambridge Analytica, which in turn established an informed consent process for research under which several hundred thousand Facebook users would agree to complete a survey for what they were told was academic use only.
However, Facebook’s design enabled this app to accumulate not only the personal information of people who consented to take part in the survey, but also the personal data of everyone in those users’ Facebook social networks. In this way, Cambridge Analytica managed to collect data from millions of Facebook users.
It was initially announced that the dataset included data on 50 million Facebook users; Facebook later confirmed that it actually held data on up to 87 million users, 70.6 million of them in the United States.
Within the United States, Facebook estimated that California was the most affected state, with 6.7 million impacted users, followed by Texas with 5.6 million and Florida with 4.3 million. And while Cambridge Analytica stated that it only accumulated 30 million Facebook user profiles, Facebook determined that the number was about 87 million.
Facebook posted a message to those users thought to be affected, stating that the data possibly included their public profile, page likes, birthday and city. Some of the app’s users had also given it authorisation to access their News Feed, timeline and messages.
The information was detailed enough for Cambridge Analytica to build psychographic profiles of the subjects of the data. The information further included the locations of each person, and for a given political campaign, the information was accurate enough to build a profile which suggested what kind of advertisement would be most effective to influence a particular person in a particular location for some political event.
In December 2015, The Guardian reported that Cambridge Analytica had used the data at the behest of Ted Cruz. Cambridge Analytica also assisted with Donald Trump’s presidential campaign, and on March 17, 2018, The Observer, The Guardian’s sister paper (published online at theguardian.com), and The New York Times broke the story concurrently.
The Observer worked with Christopher Wylie, a former employee of Cambridge Analytica, for more than a year before bringing in The New York Times to report the story in the United States. The New York Times reported that, as of March 17, 2018, the information was still accessible online.
Some, such as Meghan McCain, have drawn an equivalence between the use of data by Cambridge Analytica and by Barack Obama’s 2012 campaign, which, according to Investor’s Business Daily, prompted supporters to download an Obama 2012 Facebook app that, when activated, let the campaign collect Facebook data both on users and on their friends.
PolitiFact, a fact-checking organisation, rated this claim as Half True: in Obama’s case, users knew they were handing over their data to a political campaign, whereas Cambridge Analytica’s users believed they were only taking part in a personality quiz for academic purposes. And while the Obama campaign used the data to have its followers contact their most persuadable friends, Cambridge Analytica targeted users, their friends and lookalike audiences directly with digital ads.
Political campaigns that paid Cambridge Analytica to use data from the breach included the 2015-2016 campaign of United States Senator Ted Cruz.
Facebook CEO Mark Zuckerberg first apologised for the situation with Cambridge Analytica on CNN, describing it as an issue, a mistake and a breach of trust, and, in effect, he reminded users of their right of access to their personal data.
Other Facebook officials argued against calling it a data breach, contending that those who took the personality quiz had, in effect, agreed to give away their data.
Zuckerberg promised to make changes to Facebook policy to prevent similar violations, and on March 25, 2018, he published a personal letter in numerous newspapers apologising on behalf of Facebook. In April, Facebook decided to implement the EU’s General Data Protection Regulation in all regions of operation, not only the EU.
Amazon said it had suspended Cambridge Analytica from using Amazon Web Services when it discovered the firm was collecting personal data, and the Italian bank UniCredit stopped advertising and marketing on Facebook.
The governments of India and Brazil demanded that Cambridge Analytica report how anyone had used data from the breach in political campaigning, and courts in several US states heard suits brought by citizens affected by the data breach.
On April 25, 2018, Facebook released its first earnings report since the scandal broke. Revenue had dropped since the previous quarter, but this was normal, as it followed the holiday season quarter. In early July 2018, the United Kingdom’s Information Commissioner’s Office announced that it intended to fine Facebook £500,000 ($663,000) over the scandal, the highest penalty allowed at the time of the violation, stating that Facebook had violated the law by failing to safeguard people’s data.
In March 2019, a court filing by the US Attorney General for the District of Columbia alleged that Facebook knew of Cambridge Analytica’s inappropriate data gathering practices months before they were first publicly reported in December 2015.
In July 2019, the Federal Trade Commission voted 3-2 to approve a fine of about $5 billion for Facebook, finally settling the investigation into the scandal.
During his testimony before Congress on April 10, 2018, Mark Zuckerberg stated that it was his personal mistake that he didn’t do enough to prevent Facebook from being used for harm. But this man isn’t stupid, unless he’s suddenly hidden his brains up his sleeve.
But it was his mistake, after all, he runs Facebook, and he’s accountable for what happens there.
He said in 2010 that the thing he really cared about was the mission of making the world open, and when asked whether Facebook could earn more revenue from advertising as a result of its extraordinary growth, he said: “I guess we could”.
He further said: “We’re not like that. We make enough money”, adding that Facebook was growing at the rate it wanted. Yet in 2010 Steven Levy, author of the 1984 book Hackers: Heroes of the Computer Revolution, wrote that Mark Zuckerberg clearly thought of himself as a hacker.
Criticism of Facebook stems from the company’s prominence and has led to international media coverage: significant reporting of its legal problems, of the outsize impact it has on the lives and health of its users and employees, and of its influence on the way media, particularly news, is reported and shared.
Well-known issues include Internet privacy, such as its widespread “like” button on third-party websites tracking users, potentially indefinite records of user data, automatic facial recognition software, and its role in the workplace, including employer-employee account disclosure.
Not only that, the use of Facebook can have psychological consequences, including feelings of resentment, stress and a lack of attention, as well as social media addiction, in some cases akin to drug addiction.
Facebook’s operations have also drawn coverage of the company’s electricity usage, tax evasion, real-name user requirement policies, censorship policies, handling of user data, and involvement in the United States PRISM surveillance programme, all of which have been highlighted by the media and critics.
Facebook has come under scrutiny for ignoring or avoiding accountability for the content posted on its platform, including copyright and intellectual property infringement, hate speech, incitement of rape and terrorism, fake news, Facebook murder, and crimes and violent incidents live-streamed through its Facebook Live feature. Concerns have also been expressed about the use of Facebook as a means of surveillance and data mining.
Two Massachusetts Institute of Technology (MIT) students were able to use an automated script to download the publicly posted information of over 70,000 Facebook profiles from four schools (MIT, NYU, the University of Oklahoma, and Harvard University) as part of a research project on Facebook privacy published on December 14, 2005.
Since then, Facebook has reinforced security protection for users, stating that it has built various defences to combat phishing and malware, including complex automated systems that work behind the scenes to identify and flag Facebook accounts that are likely to be compromised, based on abnormal activity such as large numbers of messages sent in a short period of time, or messages containing links judged to be malicious.
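A minimal sketch of the rate-based signal described above might look like the following. The threshold and window values are invented for illustration, and Facebook's actual systems are not public; this only shows the general idea of flagging a burst of messages as suspicious.

```python
def is_likely_compromised(send_times, max_messages=20, window_seconds=60):
    # send_times: seconds at which an account sent each message.
    # Flag the account if more than max_messages messages fall within any
    # window_seconds span -- a toy version of the "lots of messages in a
    # short period" heuristic. Thresholds here are assumptions.
    times = sorted(send_times)
    start = 0
    for end in range(len(times)):
        # Shrink the window from the left until it spans <= window_seconds.
        while times[end] - times[start] > window_seconds:
            start += 1
        if end - start + 1 > max_messages:
            return True
    return False

burst = list(range(0, 25))          # 25 messages in 25 seconds
normal = list(range(0, 3600, 120))  # one message every 2 minutes
print(is_likely_compromised(burst), is_likely_compromised(normal))  # True False
```

Real systems combine many such signals (login location, device fingerprints, link reputation) rather than relying on a single rate check, but the sliding-window pattern above is the standard way to express a "too many events, too fast" rule.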
A second clause that drew criticism from some users allowed Facebook the right to sell users’ data to private companies, stating that it might share data with third parties, including responsible companies with which it had a relationship.
In the United Kingdom, the Trades Union Congress (TUC) has encouraged companies to allow their workers to access Facebook and other social-networking sites from work, provided they proceed with care. In September 2007, however, Facebook attracted criticism after it started allowing search engines to index profile pages, although Facebook’s privacy settings let users turn this off.
Concerns were further raised on the BBC’s Watchdog programme in October 2007, when Facebook was shown to be an easy way to collect an individual’s personal data in order to facilitate identity theft. However, little personal data is exposed to non-friends: if users leave the privacy controls on their default settings, the only personal information visible to a non-friend is the user’s name, gender, profile picture, networks and username.
But an editorial in The New York Times in February 2008 pointed out that Facebook did not actually provide a mechanism for users to close their accounts, and raised concerns that private user data remained indefinitely on Facebook’s servers.
As of 2013, Facebook gave users the option to deactivate or delete their accounts. Deactivating an account allows it to be restored later, while deleting it removes the account permanently, although some data submitted from the account, such as posts to a group or messages sent to others, remained.
In 2013, Facebook acquired Onavo, a developer of mobile utility apps such as Onavo Protect VPN, which is used as part of an insights platform to calculate the use and market share of apps. This information has since been used to influence acquisitions and other business decisions concerning Facebook products.
Criticism of this practice emerged in 2018 when Facebook began advertising Onavo Protect within its main app on iOS devices in the United States. Media outlets considered the app to be, in effect, spyware because of its behaviour, adding that the app’s listings did not readily disclose Facebook’s ownership of the app or its data collection practices.
Facebook consequently removed the iOS version of the app, citing new iOS App Store policies preventing apps from performing analytics on the usage of other apps on a user’s device. Since 2016, Facebook had also run Project Atlas, publicly known as Facebook Research, a market research programme recruiting teenagers and young adults aged 13 to 35 to have data such as their app usage, web browsing history, web search history, location history, personal messages, photos, videos, emails and Amazon order history analysed by Facebook. Participants were paid up to $20 per month for taking part in the programme.
Facebook Research is administered by third-party beta testing services, including Applause, and requires users to install a Facebook root certificate on their phone. In the wake of a January 2019 report by TechCrunch on Project Atlas, it was alleged that Facebook had circumvented the App Store by using an Apple enterprise programme intended for apps used internally by a company’s employees. Facebook denied the report but later announced it was discontinuing the programme on iOS.
On January 30, 2019, Apple briefly removed Facebook’s Enterprise Developer Programme certificates for one day, which caused all of the company’s internal iOS apps to become inoperable. Apple stated that Facebook had been using their membership to distribute a data-collecting app to consumers, which is a clear violation of their agreement with Apple, and that the certificates were removed to protect their users and their data.
US Senators Mark Warner, Richard Blumenthal, and Ed Markey separately criticised Facebook Research’s targeting of teenagers and pledged to sponsor legislation to regulate market research programmes.
Facebook had enabled users to deactivate their accounts but not to actually delete account content from its servers, which meant that users had to clear out their own accounts by manually deleting all of their content, including wall posts, friends and groups.
Quit Facebook Day was an online event which took place on May 31, 2010, corresponding with Memorial Day, in which Facebook users asserted that they would leave the social network due to privacy concerns.
It had been estimated that 2 per cent of Facebook users in the United States would delete their accounts; however, only about 33,000 users, roughly 0.0066 per cent of its approximately 500 million members at the time, left the site.
The number one reason for users to quit Facebook was privacy concerns (48 per cent), followed by general dissatisfaction with Facebook (14 per cent), negative aspects of Facebook friends (13 per cent) and the feeling of becoming addicted to Facebook (6 per cent).
Facebook deserters were found to be more concerned about privacy, more addicted to the Internet, and more conscientious.
In August 2011, the Irish Data Protection Commissioner (DPC) began an investigation after receiving 22 complaints from europe-v-facebook.org, a group founded by Austrian students.
In its first reactions, the DPC said that it was legally responsible for privacy on Facebook for all users within the European Union and that it would review the complaints using its full legal powers if needed.
The complaints were filed in Ireland because all users who were not citizens of the United States or Canada had a contract with Facebook Ireland Ltd, located in Dublin, Ireland. Under European law, Facebook Ireland is the data controller for facebook.com, and hence facebook.com is governed by European data protection laws.
Facebook Ireland Ltd. was established by Facebook Inc. to avoid US taxes (see Double Irish arrangement).
The group europe-v-facebook.org made access requests to Facebook Ireland and received up to 1,222 pages of data per person, across 57 data categories that Facebook held about them, including information users had previously deleted.
The group claimed that Facebook failed to provide some of the requested data, including likes, facial recognition data, data about third party websites that use social plugins visited by users, and information about uploaded videos, and the group alleged that Facebook held at least 84 data categories about every user.
In an interview with the Irish Independent, a spokesperson said that the DPC would audit Facebook, going onto the premises and going through every aspect of security in detail, and described the exercise as a really significant, comprehensive and intensive undertaking that would extend over four or five days.
In December 2011, the DPC published its first report on Facebook. The report was not legally binding but recommended changes that Facebook should implement by July 2012, when the DPC planned to review Facebook’s progress.
These changes were regarded by europe-v-facebook.org as insufficient to comply with European law; the download tool, for instance, did not allow access to all data. The group launched our-policy.org to suggest improvements to the new policy, which it saw as a setback for privacy on Facebook.
Because the group managed to gather more than 7,000 comments on Facebook’s pages, Facebook had to hold a global vote on the proposed changes, though such a vote would only have been binding if 30 per cent of all users had taken part.
An editorial in USA Today in November 2011 alleged that Facebook creates records of the pages visited both by its members and by non-members, relying on tracking cookies to do so. At the beginning of November 2015, Facebook was ordered by the Belgian Privacy Commissioner to stop tracking non-users, citing European law, or risk penalties of up to £250,000 per day.
As a result, instead of removing the tracking cookies, Facebook barred non-users in Belgium from seeing any material on Facebook, including publicly posted content, unless they signed in. Facebook criticised the decision, saying the cookies provided better security.
In 2010, the Wall Street Journal found that several of Facebook’s top-rated apps were transmitting identifying information to dozens of advertising and Internet tracking companies. The apps used an HTTP referer that exposed the user’s identity and sometimes their friends’ identities. Facebook said that, while knowledge of a user ID did not give access to anyone’s private data on Facebook, it did intend to introduce new technical systems that would dramatically restrict the sharing of user IDs.
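The mechanics of a referer leak are simple: when a page embeds an ad or tracker, the browser's request for that resource can carry the page's full URL in the Referer header, so any identifier in the URL's query string reaches the third party. A hedged sketch of what the tracker sees (the parameter name "uid" and the URL are hypothetical examples, not Facebook's actual scheme):

```python
from urllib.parse import urlparse, parse_qs

def ids_leaked_via_referer(referer_header, id_param="uid"):
    # Parse the Referer header a third-party tracker receives and pull out
    # any user identifier embedded in the originating page's query string.
    # "uid" is an assumed parameter name, chosen for illustration only.
    query = parse_qs(urlparse(referer_header).query)
    return query.get(id_param, [])

# The page URL the user was on, as sent to an ad server in the Referer header:
print(ids_leaked_via_referer("https://apps.example.com/game?uid=100004321&level=7"))
# ['100004321']
```

This is why the modern mitigations are to keep identifiers out of URLs entirely or to set a restrictive Referrer-Policy, so embedded third parties never receive the full page address.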
A blog post by a member of Facebook’s team further asserted that press reports had exaggerated the implications of sharing a user ID, while still acknowledging that some of the apps were passing the ID in a way that violated Facebook’s policies.
In 2018, Facebook acknowledged that an app made by Global Science Research and Aleksandr Kogan, linked to Cambridge Analytica, had been able in 2014 to collect the personal data of up to 87 million Facebook users without their permission, by exploiting their friendship connections to users who had shared their data via the app.
Following the revelations of the breach, several public figures, including industrialist Elon Musk and WhatsApp co-founder Brian Acton, announced that they were deleting their Facebook accounts, using the hashtag #DeleteFacebook.