Facebook

Irish Regulator Probes 'Old' Facebook Data Dump (bbc.com) 13

A data leak involving personal details of hundreds of millions of Facebook users is being reviewed by Ireland's Data Protection Commission (DPC). The BBC reports: The database is believed to contain a mix of Facebook profile names, phone numbers, locations and other facts about more than 530 million people. Facebook says the data is "old," from a previously-reported leak in 2019. But the Irish DPC said it will work with Facebook to make sure that is the case.

Ireland's regulator is critical to such investigations: because Facebook's European headquarters is in Dublin, the DPC serves as the company's lead regulator for the EU. The most recent data dump appears to contain the entire compromised database from the previous leak, which Facebook said it found and fixed more than a year and a half ago. But the dataset has now been published for free on a hacking forum, making it much more widely available. It covers 533 million people in 106 countries, according to researchers who have viewed the data. That includes 11 million Facebook users in the UK and more than 30 million Americans.
The DPC's deputy commissioner Graham Doyle said the recent data dump "appears to be" from the previous leak -- and that the data-scraping behind it had happened before the EU's GDPR privacy legislation was in effect.

"However, following this weekend's media reporting we are examining the matter to establish whether the dataset referred to is indeed the same as that reported in 2019," he added.
Databases

LexisNexis To Provide Giant Database of Personal Information To ICE (theintercept.com) 64

An anonymous reader quotes a report from The Intercept: The popular legal research and data brokerage firm LexisNexis signed a $16.8 million contract to sell information to U.S. Immigration and Customs Enforcement, according to documents shared with The Intercept. The deal is already drawing fire from critics and comes less than two years after the company downplayed its ties to ICE, claiming it was "not working with them to build data infrastructure to assist their efforts." Though LexisNexis is perhaps best known for its role as a powerful scholarly and legal research tool, the company also caters to the immensely lucrative "risk" industry, providing, it says, 10,000 different data points on hundreds of millions of people to companies like financial institutions and insurance companies who want to, say, flag individuals with a history of fraud. LexisNexis Risk Solutions is also marketed to law enforcement agencies, offering "advanced analytics to generate quality investigative leads, produce actionable intelligence and drive informed decisions" -- in other words, to find and arrest people.

The LexisNexis ICE deal appears to be providing a replacement for CLEAR, a risk industry service operated by Thomson Reuters that has been crucial to ICE's deportation efforts. In February, the Washington Post noted that the CLEAR contract was expiring and that it was "unclear whether the Biden administration will renew the deal or award a new contract." LexisNexis's February 25 ICE contract was shared with The Intercept by Mijente, a Latinx advocacy organization that has criticized links between ICE and tech companies it says are profiting from human rights abuses, including LexisNexis and Thomson Reuters. The contract shows LexisNexis will provide Homeland Security investigators access to billions of different records containing personal data aggregated from a wide array of public and private sources, including credit history, bankruptcy records, license plate images, and cellular subscriber information. The company will also provide analytical tools that can help police connect these vast stores of data to the right person.
In a statement to The Intercept, a LexisNexis Risk Solutions spokesperson said: "Our tool contains data primarily from public government records. The principal non-public data is authorized by Congress for such uses in the Drivers Privacy Protection Act and Gramm-Leach-Bliley Act statutes." They declined to say exactly what categories of data the company would provide ICE under the new contract, or what policies, if any, will govern how the agency uses it.
IBM

Why IBM is Pushing 'Fully Homomorphic Encryption' (venturebeat.com) 122

VentureBeat reports on a "next-generation security" technique that allows data to remain encrypted while it's being processed.

"A security process known as fully homomorphic encryption is now on the verge of making its way out of the labs and into the hands of early adopters after a long gestation period." Companies such as Microsoft and Intel have been big proponents of homomorphic encryption. Last December, IBM made a splash when it released its first homomorphic encryption services. That package included educational material, support, and prototyping environments for companies that want to experiment. In a recent media presentation on the future of cryptography, IBM director of strategy and emerging technology Eric Maass explained why the company is so bullish on "fully homomorphic encryption" (FHE)...

"IBM has been working on FHE for more than a decade, and we're finally reaching an apex where we believe this is ready for clients to begin adopting in a more widespread manner," Maass said. "And that becomes the next challenge: widespread adoption. There are currently very few organizations here that have the skills and expertise to use FHE." To accelerate that development, IBM Research has released open source toolkits, while IBM Security launched its first commercial FHE service in December...

Maass said in the near term, IBM envisions FHE being attractive to highly regulated industries, such as financial services and health care. "They have both the need to unlock the value of that data, but also face extreme pressures to secure and preserve the privacy of the data that they're computing upon," he said.

The Wikipedia entry for homomorphic encryption calls it "an extension of either symmetric-key or public-key cryptography."
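The core idea — computing on data without decrypting it — can be illustrated with a much simpler scheme than FHE. Textbook RSA (with the hypothetical tiny parameters below, and no padding, so it is insecure and for illustration only) is *multiplicatively* homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. Fully homomorphic schemes extend this idea to support both addition and multiplication, which suffices for arbitrary computation.

```python
# Toy illustration of a homomorphic property (NOT FHE, and not secure):
# textbook RSA with tiny primes. The product of two ciphertexts decrypts
# to the product of the two plaintexts, without ever decrypting either.
p, q = 61, 53                  # toy primes, for illustration only
n = p * q                      # modulus: 3233
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (modular inverse)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
c = (encrypt(a) * encrypt(b)) % n   # multiply the ciphertexts...
assert decrypt(c) == a * b          # ...and the result decrypts to a * b
```

Real FHE libraries (such as those in IBM's open source toolkits) use lattice-based schemes rather than RSA, but the homomorphic principle sketched here is the same.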
AI

A South Korean Chatbot Showed How Sloppy Tech Companies Can Be With User Data (slate.com) 11

A "Science of Love" app analyzed text conversations uploaded by its users to assess the degree of romantic feelings (based on the phrases and emojis used and the average response time). Then after more than four years, its parent company ScatterLab introduced a conversational A.I. chatbot called Lee-Luda — which it said had been trained on 10 billion such conversational logs.

But because it used billions of conversations from real people, its problems soon went beyond sexually explicit comments and "verbally abusive" language: It also soon became clear that the huge training dataset included personal and sensitive information. This revelation emerged when the chatbot began exposing people's names, nicknames, and home addresses in its responses. The company admitted that its developers "failed to remove some personal information depending on the context," but still claimed that the dataset used to train chatbot Lee-Luda "did not include names, phone numbers, addresses, and emails that could be used to verify an individual." However, A.I. developers in South Korea rebutted the company's statement, asserting that Lee-Luda could not have learned how to include such personal information in its responses unless it existed in the training dataset. A.I. researchers have also pointed out that it is possible to recover the training dataset from the A.I. chatbot. So, if personal information existed in the training dataset, it could be extracted by querying the chatbot.
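Why a model trained on raw conversations can regurgitate PII is easy to demonstrate with a toy sketch. The name and address below are invented, and a trivial deterministic bigram model stands in for a real neural chatbot, but the mechanism is the same: if personal details sit in the training corpus, the right prompt can walk the model straight back through them.

```python
# Toy sketch (hypothetical data): even a trivial bigram model reproduces
# a memorized name and address verbatim when prompted with the words
# that preceded them in the training corpus.
from collections import defaultdict

corpus = "my friend Kim Min-jun lives at 12 Example Street in Seoul".split()

model = defaultdict(list)            # word -> words observed to follow it
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(prompt: str, steps: int = 10) -> str:
    out = prompt.split()
    for _ in range(steps):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers[0])     # deterministic: replay the seen continuation
    return " ".join(out)

# Prompting with words from the corpus walks the model back through
# the personal details it memorized during training.
print(generate("my friend"))
```

Large language models are far less deterministic than this, but the extraction attacks the researchers describe exploit exactly this kind of memorization.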

To make things worse, it was also discovered that ScatterLab had, prior to Lee-Luda's release, uploaded a training set of 1,700 sentences, a part of the larger dataset it collected, to GitHub, a platform that developers use to store and share code and data. This GitHub training dataset exposed the names of more than 20 people, along with the locations they had visited, their relationship status, and some of their medical information...

[T]his incident highlights the general trend of the A.I. industry, where individuals have little control over how their personal information is processed and used once collected. It took almost five years for users to recognize that their personal data were being used to train a chatbot model without their consent. Nor did they know that ScatterLab shared their private conversations on a public platform like GitHub, where anyone can gain access.

What makes this unusual, the article points out, is how the users became aware of just how much their privacy had actually been compromised. "[B]igger tech companies are usually much better at hiding what they actually do with user data, while restricting users from having control and oversight over their own data."

And "Once you give, there's no taking back."
Businesses

Why is Amazon Taunting Politicians? (nytimes.com) 110

Confronting progressive U.S. Senators Bernie Sanders and Elizabeth Warren, Amazon officials tweeted "the kind of bad-ittude you rarely see from a major corporation," writes Kara Swisher.

"Here's what was more extraordinary — and revealing — to me: One of the most powerful companies in the world could not take criticism from politicians without acting like one of the biggest babies in the world..." But why? [I]t all felt oddly emotional and risky, which is why it was clear that the decision to launch such attacks could have been made only by someone who never suffers when mistakes are made: Mr. Bezos.

Why would he take such an approach?

I don't think his intention was to influence the union vote in Alabama. Instead, the goal was to goad progressives into proposing legislation around things like data privacy and a $15 federal minimum wage that Mr. Bezos knows cannot pass without being watered down and, thus, made less dangerous to giants like Amazon. After gaining immense power in the pandemic and becoming one of the best-liked brands around, the company is now saying to Washington legislators, who have dragged their feet and held endless and largely useless hearings about how to deal with tech: I dare you to regulate us.

For Amazon, weak regulation would certainly be much better than having to talk about the very real human toll that free shipping might have on its workers. It's an attitude that we will see adopted by a lot more tech leaders who are going to try to use the momentum for regulation in their favor, rather than let it run over them. In a recent congressional hearing, for example, Facebook's chief executive, Mark Zuckerberg, sheepishly proposed changes to Section 230 of the 1996 Communications Decency Act, which gives platforms broad immunity for content posted on their sites. Many observers felt, though, that Mr. Zuckerberg's proposals were a smoke screen that would ultimately benefit Big Tech companies like Facebook.

It's high-risk, but possibly high reward, which has been Mr. Bezos' brand for his entire career, even before he was armed with all this power and money.

Privacy

Did Patient Health Information Leak Into GitHub's Arctic Code Vault? (healthitsecurity.com) 25

HealthITSecurity writes: The patient data from multiple providers appears to have been captured and subsequently leaked on the data repository GitHub Arctic Code Vault by third-party vendor MedData, according to a new collaborative report from security researcher Jelle Ursem and Dissent Doe of DataBreaches.net.

Through his research, Ursem detected troves of protected health information tied to a single developer... The databases were taken down on December 17. MedData recently released a notice that detailed the massive patient data breach, which involved information provided to the vendor for processing services... Officials discovered that an employee had saved files to personal folders created on the GitHub repository between December 2018 and September 2019, during their employment...

The impacted data included patient names combined with one or more data elements, such as subscriber IDs, Social Security numbers, diagnoses, conditions, claims data, dates of service, medical procedure codes, insurance policy numbers, provider names, contact details, and dates of birth. All affected patients will receive free credit monitoring and identity protection services... This is the second report from Ursem and Dissent on GitHub repositories leaking patient data in the last six months. In August, they reported that at least nine GitHub repositories leveraging improper access controls leaked data from between 150,000 and 200,000 patients. The data belonged to multiple providers.

The incidents highlight the importance of vendor management and the need to ensure security policies are aligned. Previous reports have shown about one-third of healthcare databases stored in the cloud, or even locally, are actively leaking data online. What's worse, misconfigured databases can be hacked in about eight hours.

DataBreaches.net wonders what happened after Med-Data reached out to GitHub about the vault's logs and removal of the code. Did GitHub provide the logs? If so, what did they show? Is anyone's Protected Health Information in GitHub's Arctic Code Vault? And if so, what happens? Will GitHub remove it...? Or will code just be left there for researchers to explore in 1,000 years so they can wade through the personal and protected health information or other sensitive information of people who trusted others to protect their privacy?

In November, 2020, Ursem posed the question to GitHub on Twitter. They never replied.

Safari

NYT: 'If You Care About Privacy, It's Time to Try a New Web Browser' (seattletimes.com) 135

This week the lead consumer technology writer for The New York Times urged readers to switch their browser from Chrome, Safari, or Microsoft Edge to a private browser.

"For about a week, I tested three of the most popular options — DuckDuckGo, Brave and Firefox Focus. Even I was surprised that I eventually switched to Brave as the default browser on my iPhone." Firefox Focus, available only for mobile devices like iPhones and Android smartphones, is bare-bones. You punch in a web address and, when done browsing, hit the trash icon to erase the session. Quitting the app automatically purges the history. When you load a website, the browser relies on a database of trackers to determine which to block.

The DuckDuckGo browser, also available only for mobile devices, is more like a traditional browser. That means you can bookmark your favorite sites and open multiple browser tabs. When you use the search bar, the browser returns results from the DuckDuckGo search engine, which the company says is more focused on privacy because its ads do not track people's online behavior. DuckDuckGo also prevents ad trackers from loading. When done browsing, you can hit the flame icon at the bottom to erase the session.

Brave is also more like a traditional web browser, with anti-tracking technology and features like bookmarks and tabs. It includes a private mode that must be turned on if you don't want people scrutinizing your web history. Brave is also so aggressive about blocking trackers that in the process, it almost always blocks ads entirely. The other private browsers blocked ads less frequently....

In the end, though, you probably would be happy using any of the private browsers... For me, Brave won by a hair. My favorite websites loaded flawlessly, and I enjoyed the clean look of ad-free sites, along with the flexibility of opting in to see ads whenever I felt like it. Brendan Eich, the chief executive of Brave, said the company's browser blocked tracking cookies "without mercy."

"If everybody used Brave, it would wipe out the tracking-based ad economy," he said.

Count me in.

Electronic Frontier Foundation

Privacy Advocate Confronts ACLU Over Its Use of Google and Facebook's Targeted Advertising (twitter.com) 20

Ashkan Soltani was the Chief Technologist of America's Federal Trade Commission in 2014 — and earlier was a staff technologist in its Division of Privacy and Identity Protection, helping investigate tech companies including Google and Facebook.

Friday on Twitter he accused another group of privacy violations: the nonprofit rights organization, the American Civil Liberties Union. Yesterday, the ACLU updated their privacy statement to finally disclose that they share constituent information with 'service providers' like Facebook for targeted advertising, flying in the face of the org's public advocacy and statements.

In fact, I was retained by the ACLU last summer to perform a privacy audit after concerns were raised internally regarding their data sharing practices. I only agreed to do this work on the promise by the ACLU's Executive Director that the findings would be made public. Unfortunately, after reviewing my findings, the ACLU decided against publishing my report and instead sat on it for ~6 months before quietly updating their terms of service and privacy policy without explanation of the context or motivations for doing so. While I'm bound by a nondisclosure agreement not to disclose the information I uncovered or my specific findings, I can say with confidence that the ACLU's updated privacy statements do not reflect the full picture of their practices.

For example, public transparency data from Google shows that the ACLU has paid Google nearly half a million dollars to deliver targeted advertisements since 2018 (when the data was first made public). The ACLU also disclosed that its advertising relationship with Facebook began only in 2021, when in truth the relationship spans back years, totaling over $5 million in ad spend. These relationships fly in the face of the ACLU's principles and public statements regarding transparency, control, and disclosure before use, even as the organization claims to be a strong advocate for privacy rights at the federal and state level. In fact, the NY Attorney General conducted an inquiry into whether the ACLU had violated its promises to protect the privacy of donors and members in 2004, the results of which many aren't aware. And to be clear, the practices described would very much constitute a 'sale' of members' PII under the California Privacy Rights Act (CPRA).

The irony is not lost on me that the ACLU vehemently opposed the CPRA — the toughest state privacy law in the country — when it was proposed. While I have tremendous respect for the work the ACLU and other NGOs do, it's important that nonprofits are bound by the same privacy standards they espouse for everyone else. (Full disclosure: I'm on the EFF advisory board and was recently invited to join EPIC's board.)

My experience with the ACLU further amplifies the need to have strong legal privacy protections that apply to nonprofits as well as businesses — partially since many of the underlying practices, particularly in the area of fundraising and advocacy, are similar if not worse.

Soltani also re-tweeted an interesting response from Alex Fowler, a former EFF VP who was also Mozilla's chief privacy officer for three years: I'm reminded of EFF co-founder John Gilmore telling me about the Coders' Code: If you find a bug or vulnerability, tell the coder. If coder ignores you or refuses to fix the issue, tell the users.
IOS

App Store Now Rejecting Apps Using Third-Party SDKs That Collect User Data Without Consent (9to5mac.com) 14

iOS 14 has brought several new privacy features, and there are more to come with App Tracking Transparency -- which will let users opt out of being tracked by apps. From a report: As the launch of this new option approaches, Apple has begun to reject apps using third-party SDKs that collect user data without consent. Developers can implement some SDKs that help them track users by a method called "device fingerprinting," which uses multiple attributes such as the device model, IP address, and other data to identify a person across the internet. Apps often use this data for deep analysis about their audience or to sell advertisements.

While tracking the user is not exactly illegal, Apple wants to put an end to apps that do this without explicit consent. As noted by analyst Eric Seufert, the company is now rejecting any apps using the Adjust SDK, which is one of those SDKs that provides device fingerprinting. There would be no problem for these developers if the Adjust SDK complied with Apple's new privacy guidelines, but this doesn't seem to be the case. Seufert detailed to 9to5Mac that the Adjust SDK not only doesn't have an option for users to opt out of being tracked, but has also been suggesting alternatives for developers to continue tracking users once Apple enables App Tracking Transparency.
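The "device fingerprinting" method described above can be sketched in a few lines. The attribute names and values below are hypothetical, and real SDKs combine many more signals, but the principle is simply to hash a bundle of device attributes into a stable identifier that survives even when no explicit ID like the IDFA is available.

```python
# Minimal sketch of device fingerprinting: hash a canonical string of
# device attributes into a stable ID. Attribute names/values are
# hypothetical; real SDKs use many more signals.
import hashlib

def fingerprint(attrs: dict) -> str:
    # Sort keys so the same attributes always produce the same string.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "model": "iPhone12,1",
    "os_version": "14.4",
    "ip": "203.0.113.7",            # documentation-range address
    "timezone": "America/New_York",
}
fp = fingerprint(device)
# The same attributes always yield the same ID, so the device can be
# recognized across apps and sessions without any explicit identifier.
assert fp == fingerprint(dict(device))
```

Because the identifier is derived rather than stored on the device, opting out of the IDFA alone does nothing to stop it — which is why Apple is policing the SDKs themselves.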
Snap has explored how it can circumvent new privacy rules for iPhones, the Financial Times reported Friday.
Security

'Incompetent Developers' Blamed For NZ Patient Privacy Breach of COVID-19 Vaccine Booking Systems (stuff.co.nz) 54

An anonymous reader writes: The New Zealand Ministry of Health has launched a "sweeping review" of the nation's COVID vaccine-booking system, after a data breach led to exposure of personal information for more than 700 patients. A whistleblower reported over the weekend that they could access information about other patients, which was "readily accessible within the public-facing code of the website" -- apparently hard coded.

As a response, the Ministry of Health has ordered a review of all systems made by the developer, Valentia Technologies, which also makes software used by the Ambulance service, many GP practices, and the managed isolation and quarantine system.
"It is not a coding error. It is incompetence. The developer who developed this is incompetent ... This is basic stuff," said the man who spotted the booking system problem.

"The source code of the website, flagged a few concerning features, including someone's name, and an NHI number hard coded into the website, for what reason? I don't know," he said. "We could see everyone's details. We skimmed through, we didn't look at names, but their names, dates of birth, NHI numbers for those who entered them, contact details, where they were getting their vaccinations, what time they were vaccinated."

He said it appeared that Canterbury DHB had used a modified internal system to create the booking system. "You can tell by the source code, this was never meant to be a public facing website. This was only for people to use on like iPads, in doctors' surgeries, it was not supposed to be for this."
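Why data hard coded into a public-facing site is "readily accessible" is easy to show: anything embedded in the HTML or JavaScript the server delivers can be found with a simple pattern scan. The page content below is an invented illustration of the anti-pattern, not the actual booking-system code.

```python
# Sketch of why hard-coded data in served page source is trivially
# exposed. The page content here is hypothetical; NHI numbers follow
# a three-letters-then-four-digits format.
import re

page_source = """
<script>
  // leftover test data accidentally shipped to production
  var patient = {name: "Jane Doe", nhi: "ABC1234"};
</script>
"""

nhi_pattern = re.compile(r"\b[A-Z]{3}\d{4}\b")
matches = nhi_pattern.findall(page_source)
print(matches)   # any match is visible to every visitor who views source
```

No exploit is required: viewing the page source, as the whistleblower did, is enough — which is why the reviewer called it incompetence rather than a coding error.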
Facebook

'Apple and Facebook's Fight Isn't Actually About Privacy Or Tracking' (inc.com) 22

Long-time Slashdot reader schwit1 quotes a columnist from Inc: Apple isn't going to stop developers from tracking you. It's also not against personalized ads, as Facebook refers to the targeted advertising it shows you based on your internet activity. If you want to share everything you do online with Facebook, Apple won't stop you. In that case, a developer can still collect the IDFA for the purpose of targeting ads or tracking conversions.

Apple is just going to require developers to be transparent about what data they want to collect and how they want to use it. Then, they have to ask your permission.

That's what the real fight is over – transparency. And, it's why Facebook is so worried.

Facebook's problem is that, if given a choice, many people will choose not to allow tracking. A recent survey from AppsFlyer, an attribution data platform, shows that almost half of all users (47 percent) are likely to opt out of tracking.

That's the dirty little secret it would rather not talk about. Facebook doesn't want you to think about tracking, and certainly doesn't want you to have a choice.

The column includes a pithy observation. "If your business model will break because people are given a choice over whether or not you can track them, your problem isn't with Apple. Your problem is the business model."
China

How 'Rest of World' Wants to Change International Tech Coverage (medium.com) 19

Medium's tech site OneZero reports on "Rest of World" [dot org], which they call "a news site dedicated to telling technology stories about what's happening outside of North America and Europe," but founded as a nonprofit by the daughter of former Google CEO Eric Schmidt. Sophie Schmidt: We have big intractable problems in the tech and society category: misinformation, disinformation, surveillance, privacy, you name it. We're creating panels, and commissions, and we're shaking our fists at big platforms and saying, "Please fix it." And it feels a little bit helpless. But the thing that's not coming up is that every other country in the world is also dealing with it in slightly different ways.

What if the solutions to our problems lie in the sharing of those experiences, and ideas, and learnings? Expanding the dataset. It's honestly baffling. We have billions of people in the world all using technology all the time. I think the last data I saw said there's almost 5 billion people online. And depending on how you count Western versus non-Western, something like 80% of all humans live outside of the Western bubble. That means that you have almost an infinite number of parallel experiments, playing out simultaneously all around us just outside of you. So, why aren't we comparing experiences...?

Some of the interview's highlights:
  • The senior editor agrees Clubhouse might change the way that politics works globally. "But I think the second option, which we're already seeing glimmers of, is that it's going to get banned in more places. And the places where it doesn't get banned, it's going to be very closely surveilled."

Social Networks

India Antitrust Body Orders Investigation Into WhatsApp's Privacy Policy Changes (techcrunch.com) 2

WhatsApp's planned policy changes aren't sailing smoothly in India, the instant messaging service's biggest market by users. From a report: India's antitrust body, the Competition Commission of India, on Wednesday ordered an investigation into WhatsApp's privacy policy changes, saying that the Facebook-owned service breached local antitrust laws in the guise of a policy update. The Indian watchdog has ordered the nation's Director General (DG) to investigate WhatsApp's new policy to "ascertain the full extent, scope and impact of data sharing through involuntary consent of users." The Director General has been ordered to complete the investigation and submit the report within 60 days. In its order, the Indian watchdog said the "take-it-or-leave-it" nature of WhatsApp's privacy policy and terms of service "merit a detailed investigation in view of the market position and market power enjoyed by WhatsApp."
Facebook

Mark Zuckerberg Suggests How To Tweak Tech's Liability Shield (axios.com) 52

Facebook CEO Mark Zuckerberg will tell lawmakers his plan for "thoughtful reform" of a key tech liability shield rests on requiring best practices for treating illegal content online. From a report: Tech giants are starting to embrace changes to the foundational law that shields platforms from liability from content users post as lawmakers from both parties threaten it. In written testimony ahead of the House hearing Thursday with Google, Twitter and Facebook CEOs, Zuckerberg suggested making Section 230 protections for certain types of unlawful content conditional on platforms' ability to meet best practices to fight the spread of the content. "Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it," Zuckerberg wrote in the testimony. "Platforms should not be held liable if a particular piece of content evades its detection -- that would be impractical for platforms with billions of posts per day -- but they should be required to have adequate systems in place to address unlawful content." The detection system would be proportionate to platform size, with practices defined by a third party. The best practices would not include "unrelated issues" like encryption or privacy changes, he notes. He also suggested Congress bring more transparency and oversight on how companies make and enforce rules about content that is harmful but still legal.
Privacy

Consumer Reports: Tesla's In-Car Cameras Raise Privacy Concerns (reuters.com) 86

An anonymous reader quotes a report from Reuters: Tesla's use of in-car cameras to record and transmit video footage of passengers to develop self-driving technology raises privacy concerns, influential U.S. magazine Consumer Reports said on Tuesday. Consumer Reports said the practice potentially undermines the safety benefit of driver monitoring, which is to alert drivers when they are not paying attention to the road.

"If Tesla has the ability to determine if the driver isn't paying attention, it needs to warn the driver in the moment, like other automakers already do," said Jake Fisher, senior director of Consumer Reports' auto test center. Automakers such as Ford Motor and General Motors, whose monitoring systems do not record or transmit data or video, use infrared technology to identify drivers' eye movements or head position to warn them if they are exhibiting signs of impairment or distraction, the magazine said.
Last week, the Chinese government restricted the use of Tesla's vehicles by military staff and employees of key state-owned companies, citing concerns that the data collected by the cars could be a source of national security leaks.

Elon Musk responded by saying that Tesla would be shut down if its cars were used to spy.
Privacy

Amazon Delivery Drivers Forced To Sign 'Biometric Consent' Form or Lose Job (vice.com) 108

Amazon delivery drivers nationwide have to sign a "biometric consent" form this week that grants the tech behemoth permission to use AI-powered cameras to access drivers' location, movement, and biometric data. From a report: If the company's delivery drivers, who number around 75,000 in the United States, refuse to sign these forms, they lose their jobs. The form requires drivers to agree to facial recognition and other biometric data collection within the trucks they drive. "Amazon may... use certain Technology that processes Biometric Information, including on-board safety camera technology which collects your photograph for the purposes of confirming your identity and connecting you to your driver account," the form reads. "Using your photograph, this Technology, may create Biometric Information, and collect, store, and use Biometric Information from such photographs." It adds that "this Technology tracks vehicle location and movement, including miles driven, speed, acceleration, braking, turns, and following distance ...as a condition of delivery packages for Amazon, you consent to the use of Technology."
Mozilla

Mozilla Firefox Tweaks Referrer Policy To Shore Up User Privacy (zdnet.com) 24

Mozilla Firefox will soon include a revised Referrer Policy to tighten up queries and better protect user information. From a report: Firefox 87, due to ship on March 23, will cut back on path and query string information from referrer headers "to prevent sites from accidentally leaking sensitive user data." In a blog post on Monday, developer Dimi Lee and security infrastructure engineering manager Christoph Kerschbaumer said the latest browser version will include a "stricter, more privacy-preserving default Referrer Policy." Browsers send HTTP Referrer headers to websites to indicate which location has 'referred' a user to a website server. Full URLs of referring documents are often sent in the HTTP Referrer header with other subresource requests, and while this may contain innocent information used for purposes including analytics, private user data may also be included. Referrer policies aim to protect this data, but if no policy is set by a website, this often defaults to "no-referrer-when-downgrade," an element that Firefox says does trim down the referrer when navigating to a less secure resource, but still "sends the full URL including path and query information of the originating document as the referrer."
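The effect of the stricter default policy can be sketched as a simple decision function. This is a simplified model of "strict-origin-when-cross-origin" behavior, not Firefox's actual implementation: same-origin requests keep the full referrer, cross-origin requests are trimmed to the origin, and HTTPS-to-HTTP downgrades send no referrer at all.

```python
# Simplified model of the "strict-origin-when-cross-origin" referrer
# policy (not Firefox's actual code): trim the referrer to the origin
# for cross-origin requests, dropping the path and query string.
from urllib.parse import urlsplit

def referrer(document_url: str, target_url: str) -> str:
    doc, tgt = urlsplit(document_url), urlsplit(target_url)
    if (doc.scheme, doc.netloc) == (tgt.scheme, tgt.netloc):
        return document_url                  # same-origin: full URL
    if doc.scheme == "https" and tgt.scheme == "http":
        return ""                            # downgrade: send no referrer
    return f"{doc.scheme}://{doc.netloc}/"   # cross-origin: origin only

# A cross-origin request no longer leaks the path or query string:
print(referrer("https://example.com/account?id=42",
               "https://ads.example.net/pixel"))
```

Under the old "no-referrer-when-downgrade" default, the cross-origin case would have sent the full `https://example.com/account?id=42`, leaking both the path and the query parameter.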
Facebook

US Supreme Court Rebuffs Facebook Appeal In User Tracking Lawsuit (reuters.com) 23

The U.S. Supreme Court on Monday turned away Facebook's bid to pare back a $15 billion class action lawsuit accusing the company of illegally tracking the activities of internet users even when they are logged out of the social media platform. Reuters reports: The justices declined to hear Facebook's appeal of a lower court ruling that revived the proposed nationwide litigation accusing the company of violating a federal law called the Wiretap Act by secretly tracking the visits of users to websites that use Facebook features such as the "like" button. The litigation also accuses the company of violating the privacy rights of its users under California law but Facebook's appeal to the Supreme Court involved only the Wiretap Act.

Four individuals filed the proposed nationwide class action lawsuit in California federal court seeking $15 billion in damages for Menlo Park, California-based Facebook's actions between April 2010 and September 2011. The company stopped its nonconsensual tracking after it was exposed by a researcher in 2011, court papers said. Facebook said it protects the privacy of its users and should not have to face liability over commonplace computer-to-computer communications. Facebook has more than 2.4 billion users worldwide, including more than 200 million in the United States.

The case centers on Facebook's use of features called "plug-ins" that third-parties often incorporate into their websites to track the browsing histories of users. Along with digital files called "cookies" that can help identify internet users, the plaintiffs accused Facebook of packaging this tracked data and selling it to advertisers for profit. Facebook said it uses the data it receives to tailor the content it shows its users and to improve ads on its service. [...] In its appeal to the Supreme Court, Facebook said it is not liable under the Wiretap Act because it is a party to the communications at issue by virtue of its plug-ins.
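The tracking mechanism described above can be sketched in miniature. This is a hypothetical illustration of how any embedded third-party widget can log browsing history, not Facebook's actual code: when a page embeds a resource from a tracker's domain, the browser attaches that domain's cookie to the request along with a referrer naming the page being visited, letting the third party tie visits across unrelated sites to one identity. All names here are invented for the example.

```python
class ThirdPartyTracker:
    """Toy server-side log for an embedded widget (e.g. a 'like' button)."""

    def __init__(self):
        self.visits = {}  # cookie id -> list of pages that embedded the widget

    def handle_widget_request(self, cookie_id, referring_page):
        # Each widget load reveals which page the user is currently viewing.
        self.visits.setdefault(cookie_id, []).append(referring_page)

tracker = ThirdPartyTracker()
# The same cookie follows the user across unrelated sites that embed the widget,
# whether or not the user is logged in to the tracker's own service.
tracker.handle_widget_request("cookie-123", "https://news.example/story1")
tracker.handle_widget_request("cookie-123", "https://shop.example/cart")
print(tracker.visits["cookie-123"])
```

The plaintiffs' claim, in these terms, is that logging the referring page for logged-out users made Facebook an interceptor of the user-to-website communication; Facebook's defense is that, as the recipient of the widget request, it was itself a party to it.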

Security

A Security App's Fake Reviews Give Us a Window Into 'App Store Optimization' (vice.com) 17

A company that makes an email app that helps users encrypt their emails paid for fake reviews in an attempt to get more people to download its products, according to leaked emails obtained by Motherboard. An anonymous reader shares a report: The CEO of pEp, a Luxembourg-based company that makes the pEp email encryption apps for Android and iOS, commissioned a marketing company in the summer of last year to post fake reviews that he himself wrote. Leon Schumacher asked the marketing company Mobiaso to post 40 five-star reviews in English, French, and German to the Google Play Store. Schumacher included an Excel spreadsheet containing the specific text that he wanted Mobiaso to use. "Super easy privacy," one fake review said. "One of the best mail applications. I have never had problems and I suggest it all the time to friends," another said.

"Can we speed up today and do 12 ratings per day do 7 reviews per day (Please use the Texts below for the right countries (that I forwarded already per earlier e-mail)," Schumacher wrote in an email to Mobiaso. pEp, short for Pretty Easy Privacy, develops email encryption apps for both iOS and Android, where it has more than 10,000 installs, according to the stats on the Google Play Store. The company, through its foundation, also funded a new library to encrypt emails using PGP, the decades-old technology that allows users to encrypt emails and other files. Mobiaso advertises "iOS reviews" and "Android installs" on its website. One of the services the company offers is App Store Optimization, or ASO, which includes fake reviews. The service has several price tiers, ranging from $160 to $450; only the two most expensive tiers include fake reviews. "Each app developer/advertiser should remember that without a good ASO search optimization, your target audience wouldn't even find or open your app page," Mobiaso says.

Social Networks

Stricter Rules for Internet Platforms? What are the Alternatives... (acm.org) 83

A law professor serving on the EFF's board of directors (and advisory boards for the Electronic Privacy Information Center and the Center for Democracy and Technology) offers this analysis of "the push for stricter rules for internet platforms," reviewing proposed changes to the liability-limiting Section 230 of the Communications Decency Act — and speculating about what the changes would accomplish: Short of repeal, several initiatives aim to change section 230. Eleven bills have been introduced in the Senate and nine in the House of Representatives to amend section 230 in various ways.... Some would widen the categories of harmful conduct for which section 230 immunity is unavailable. At present, section 230 does not apply to user-posted content that violates federal criminal law, infringes intellectual property rights, or facilitates sex trafficking. One proposal would add to this list violations of federal civil laws.

Some bills would condition section 230 immunity on compliance with certain conditions or make it unavailable if the platforms engage in behavioral advertising. Others would require platforms to spell out their content moderation policies with particularity in their terms of service (TOS) and would limit section 230 immunity to TOS violations. Still others would allow users whose content was taken down in "bad faith" to bring a lawsuit to challenge this and be awarded $5,000 if the challenge was successful. Some bills would impose due process requirements on platforms concerning removal of user-posted content. Other bills seek to regulate platform algorithms in the hope of stopping the spread of extremist content or in the hope of eliminating biases...

Neither legislation nor an FCC rule-making may be necessary to significantly curtail section 230 as a shield from liability. Conservative Justice Thomas has recently suggested a reinterpretation of section 230 that would support imposing liability on Internet platforms as "distributors" of harmful content... Section 230, after all, shields these services from liability as "speakers" and "publishers," but is silent about possible "distributor" liability. Endorsing this interpretation would be akin to adopting the notice-and-takedown rules that apply when platforms host user-uploaded files that infringe copyrights.

Thanks to Slashdot reader Beeftopia for sharing the article, which ultimately concludes: - Notice-and-takedown regimes have long been problematic because false or mistaken notices are common and platforms often quickly take down challenged content, even if it is lawful, to avoid liability...

- For the most part, these platforms promote free speech interests of their users in a responsible way. Startup and small nonprofit platforms would be adversely affected by some of the proposed changes insofar as the changes would enable more lawsuits against platforms for third-party content. Fighting lawsuits is costly, even if one wins on the merits.

- Much of the fuel for the proposed changes to section 230 has come from conservative politicians who are no longer in control of the Senate.

- The next Congress will have a lot of work to do. Section 230 reform is unlikely to be a high priority in the near term. Yet, some adjustments to that law seem quite likely over time because platforms are widely viewed as having too much power over users' speech and are not transparent or consistent about their policies and practices.
