AutoTL;DR

I’m a bot that provides summaries of articles on supported sites!

If you need help, contact @rikudou@lemmings.world.

Official community: !autotldr@lemmings.world.

The source code is at https://github.com/RikudouSage/LemmyAutoTldrBot.

  • 0 Posts
  • 58 Comments
Joined 1Y ago
Cake day: Aug 01, 2023


This is the best summary I could come up with:


Advances in artificial intelligence are leading to medical breakthroughs once thought impossible, including devices that can actually read minds and alter our brains.

Pauzauskie says our brain waves are like encrypted signals and, using artificial intelligence, researchers have identified frequencies for specific words to turn thought into text with 40% accuracy: “Which, give it a few years, we’re probably talking 80-90%.”

Researchers are now working to reverse the conditions by using electrical stimulation to alter the frequencies or regions of the brain where they originate.

But while medical research facilities are subject to privacy laws, private companies, which are amassing large caches of brain data, are not.

The vast majority of them also don’t disclose where the data is stored, how long they keep it, who has access to it, and what happens if there’s a security breach…

With companies and countries racing to access, analyze, and alter our brains, Pauzauskie suggests, privacy protections should be a no-brainer: "It’s everything that we are."


The original article contains 796 words, the summary contains 165 words. Saved 79%. I’m a bot and I’m open source!


This is the best summary I could come up with:


"Users will be directed to the Chrome Web Store, where they will be recommended Manifest V3 alternatives for their disabled extension.

The most salient of these is the blocking version of the webRequest API, which is used to intercept and alter network traffic prior to display.

Under Manifest V2, it’s what extension developers use to stop adverts, trackers, and other content appearing on pages, and prevent certain scripts from running.

The new MV3 architecture reflects Google’s avowed desire to make browser extensions more performant, private, and secure.

Li acknowledged the issue by noting the ways in which Google has been responsive: adding support for user scripts and for offscreen documents that have access to the DOM API, and raising the limits in the declarativeNetRequest API (the replacement for webRequest) to 330,000 static rules and 30,000 dynamic ones.
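For readers who haven't written extensions, a minimal sketch of the difference being described may help. This is generic, hypothetical extension code (the ad-host pattern and rule ID are invented for illustration), not anything from the article:

```ts
// Manifest V2 style: a blocking webRequest listener runs extension code for
// every request and can cancel or modify it imperatively.
chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    // Hypothetical filter: cancel anything loaded from an ad host.
    return { cancel: details.url.includes("ads.example.com") };
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);

// Manifest V3 style: the extension declares rules up front and the browser
// applies them itself; no extension code runs per request.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1], // replace any previous version of this rule
  addRules: [
    {
      id: 1,
      priority: 1,
      action: { type: "block" },
      condition: {
        urlFilter: "||ads.example.com^",
        resourceTypes: ["script", "image"],
      },
    },
  ],
});
```

The static and dynamic limits mentioned above (330,000 and 30,000) cap how many such declarative rules an extension can register, which is why content-blocker developers pushed back on the change.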

And by the beginning of 2025, when the API changes have been available for some time in the Chrome Stable channel, Manifest V2 extensions will stop working.


The original article contains 589 words, the summary contains 167 words. Saved 72%. I’m a bot and I’m open source!


This is the best summary I could come up with:


Google needs to pump the brakes when it comes to tracking sensitive information shared with DMV sites, a new lawsuit suggests.

This, Wilson argued, violated the Driver’s Privacy Protection Act (DPPA), as well as the California Invasion of Privacy Act (CIPA), and impacted perhaps millions of drivers who had no way of knowing Google was collecting sensitive information shared only for DMV purposes.

Likely due to the DMV promoting its website’s convenience, the agency reported a record number of online transactions in 2020, Wilson’s complaint said.

Wilson most recently visited the DMV site last summer, when she was renewing her disability parking placard online.

“That Plaintiff and Class Members would not have consented to Google obtaining their personal information or learning the contents of their communications with the DMV is not surprising.”

Congressman James P. Moran, who sponsored the DPPA in 1994, made it clear that the law was enacted specifically to keep marketers from taking advantage of computers making it easy to “pull up a person’s DMV record” with the “click of a button.”


The original article contains 554 words, the summary contains 172 words. Saved 69%. I’m a bot and I’m open source!


This is the best summary I could come up with:


The company’s new advertising business will encompass purchase information and customer spending habits from PayPal and its sister app Venmo, according to The Wall Street Journal.

A PayPal spokesperson tells the WSJ that the company will collect data from customers by default while also offering the ability to opt out.

When asked about the kinds of data PayPal will collect, spokesperson Taylor Watson told The Verge that the advertising platform is still in “early stages” and that the company doesn’t have “definitive answers” yet.

“Alongside the advertising business, PayPal will build transparent, easy-to-use privacy controls,” Watson says.

In January, a study from Consumer Reports revealed that Facebook gets information about users from thousands of different companies, including retailers like Walmart and Amazon.

JPMorgan Chase also announced that it’s creating an ad network based on customer spending data, while Visa is making similar moves.


The original article contains 325 words, the summary contains 143 words. Saved 56%. I’m a bot and I’m open source!


This is the best summary I could come up with:


At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called “Recall” for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC.

To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research.

By performing a Recall action, users can access a snapshot from a specific time period, providing context for the event or moment they are searching for.

For example, someone with access to your Windows account could potentially use Recall to see everything you’ve been doing recently on your PC, which might extend beyond the embarrassing implications of pornography viewing and actually threaten the lives of journalists or perceived enemies of the state.

Despite the privacy concerns, Microsoft says that the Recall index remains local and private on-device, encrypted in a way that is linked to a particular user’s account.

To use Recall, users will need to purchase one of the new “Copilot Plus PCs” powered by Qualcomm’s Snapdragon X Elite chips, which include the necessary neural processing unit (NPU).


The original article contains 596 words, the summary contains 188 words. Saved 68%. I’m a bot and I’m open source!


This is the best summary I could come up with:


The Federal Trade Commission’s Office of Technology has issued a warning to automakers that sell connected cars.

Just because executives and investors want recurring revenue streams, that does not “outweigh the need for meaningful privacy safeguards,” the FTC wrote.

Based on your feedback, connected cars might be one of the least-popular modern inventions among the Ars readership.

Last January, a security researcher revealed that a vehicle identification number was sufficient to access remote services for multiple different makes, and yet more had APIs that were easily hackable.

Those were rather abstract cases, but earlier this year, we saw a very concrete misuse of connected car data.

Writing for The New York Times, Kashmir Hill reported that owners of connected vehicles made by General Motors had been unwittingly enrolled in OnStar’s Smart Driver program and that their driving data had been shared with their insurance company, resulting in soaring insurance premiums.


The original article contains 401 words, the summary contains 150 words. Saved 63%. I’m a bot and I’m open source!


This is the best summary I could come up with:


Campaigners, experts and peers say the U.K. is fast becoming an outlier among democracies in the pace at which it is adopting live facial recognition (LFR) technology in the absence of firm legal underpinnings.

In contrast, the issue has rarely made headlines in the U.K. — despite Prime Minister Rishi Sunak seeking to position the country as a global leader in AI governance.

Civil society groups warn that facial recognition technology, particularly in its live form, is invasive, imperfect and risks exacerbating the same sort of structural issues in community policing that have marred policies like “stop and search.”

Under the Data Protection and Digital Information Bill, the government is proposing to abolish the position of surveillance camera commissioner, a role responsible for encouraging compliance with one of the few statutory codes that does mention LFR.

The Home Office, however, argues that a combination of common law policing powers, non-binding guidance, and human rights, data protection, and equalities legislation forms a “comprehensive legal framework” with “appropriate safeguards.”

“The U.K. is increasingly an outlier in the democratic world in taking this approach, with European countries, the EU, U.S. states and cities banning or severely restricting law enforcement use of LFR.”


The original article contains 1,386 words, the summary contains 199 words. Saved 86%. I’m a bot and I’m open source!


This is the best summary I could come up with:


It followed requests by the Dutch, Norwegian, and Hamburg Data Protection Authorities and complaints about Meta, the social media company that owns Facebook, WhatsApp, and Instagram.

“Most users consent to the processing in order to use a service, and they do not understand the full implications of their choices,” EDPB chair Anu Talus said in a statement.

But a Meta spokesperson said: "Last year, the Court of Justice of the European Union ruled that the subscription model is a legally valid way for companies to seek people’s consent for personalized advertising."

In November last year, privacy activist group noyb (None Of Your Business) filed a complaint with the Austrian data protection authority against Meta for introducing the subscription model.

At the time, Felix Mikolasch, data protection lawyer at noyb, said: "EU law requires that consent is the genuine free will of the user."

In February, consumer groups filed their own complaint to stop Meta giving EU users a “fake choice” between the subscription offer and consenting to being profiled and tracked via data collection.


The original article contains 556 words, the summary contains 174 words. Saved 69%. I’m a bot and I’m open source!


This is the best summary I could come up with:


The lawsuit [PDF], filed in June 2020 on behalf of plaintiffs Chasom Brown, Maria Nguyen, and William Byatt, sought to hold Google accountable for making misleading statements about privacy.

But, as alleged in the lawsuit, Google didn’t provide the privacy it promised and implied through services like Chrome’s Incognito mode.

Chrome’s Incognito mode only provides privacy in the client by not keeping a locally stored record of the user’s browsing history.

Even so, Google was sanctioned nearly $1 million in 2022 by Magistrate Judge Susan van Keulen for concealing details about how it can detect when Chrome users employ Incognito mode.

“Google employees described Chrome Incognito Mode as ‘misleading,’ ‘effectively a lie,’ a ‘confusing mess,’ a ‘problem of professional ethics and basic honesty,’ and as being ‘bad for users, bad for human rights, bad for democracy,’” according to the declaration [PDF] of Mark C Mao, a partner with the law firm of Boies Schiller Flexner LLP, which represents the plaintiffs.

The settlement [PDF] requires that Google: inform users that it collects private browsing data, both in its Privacy Policy and in an Incognito Splash Screen; “must delete and/or remediate billions of data records that reflect class members’ private browsing activities”; block third-party cookies in Incognito mode for the next five years (separately, Google is phasing out third-party cookies this year); and must delete the browser signals that indicate when private browsing mode is active, to prevent future tracking.


The original article contains 670 words, the summary contains 239 words. Saved 64%. I’m a bot and I’m open source!


This is the best summary I could come up with:


Competition cops in Europe and the United Kingdom have started paying attention to in-app browsers, a controversial mechanism for presenting web content within native apps.

Steiner observed: “WebViews can also be used for effectively conducting intended man-in-the-middle attacks, since the IAB [in-app browser] developer can arbitrarily inject JavaScript code and also intercept network traffic.”

Nonetheless, the possibility that in-app browsers might enable code injection and traffic interception for illegitimate purposes struck a nerve among those worried about privacy and security.

Bill Budington, senior staff technologist for the Electronic Frontier Foundation, told The Register that the EFF hasn’t taken a position on in-app browsers.

Jon von Tetzchner, CEO of browser maker Vivaldi, told The Register in a phone interview about an article written perhaps a decade ago by Tim Berners-Lee on closed systems.

If you look at how they’ve implemented their choice screen and how they’re dealing with allowing browsers that are not based on WebKit and how they introduced the Core Technology fee – they kind of make everyone else look pretty good.


The original article contains 1,500 words, the summary contains 173 words. Saved 88%. I’m a bot and I’m open source!


This is the best summary I could come up with:


Israel has deployed a mass facial recognition program in the Gaza Strip, creating a database of Palestinians without their knowledge or consent, The New York Times reports.

The program, which was created after the October 7th attacks, uses technology from Google Photos as well as a custom tool built by the Tel Aviv-based company Corsight to identify people affiliated with Hamas.

Corsight, which has boasted that its technology can accurately identify people even if less than 50 percent of their face is visible, used these photos to build a facial recognition tool Israeli officers could use in Gaza.

To further build out its database — and identify potential targets — the Israeli military set up checkpoints equipped with facial recognition cameras along major roads Palestinians used to flee south.

One officer told the Times that Google Photos could identify people even when only a small portion of their face was visible, making it better than other tools, including Corsight.

According to the Forbes report, Corsight’s technology was able to take images of people “whose features had been impacted by physical trauma, and find a match amongst photos sent in by concerned family members.”


The original article contains 623 words, the summary contains 193 words. Saved 69%. I’m a bot and I’m open source!


This is the best summary I could come up with:


In 2016, Facebook launched a secret project designed to intercept and decrypt the network traffic between people using Snapchat’s app and its servers.

On Tuesday, a federal court in California released new documents discovered as part of the class action lawsuit between consumers and Meta, Facebook’s parent company.

“Whenever someone asks a question about Snapchat, the answer is usually that because their traffic is encrypted we have no analytics about them,” Meta chief executive Mark Zuckerberg wrote in an email dated June 9, 2016, which was published as part of the lawsuit.

When the network traffic is unencrypted, this type of attack allows the hackers to read the data inside, such as usernames, passwords, and other in-app activity.

This is why Facebook engineers proposed using Onavo, which when activated had the advantage of reading all of the device’s network traffic before it got encrypted and sent over the internet.

“We now have the capability to measure detailed in-app activity” from “parsing snapchat [sic] analytics collected from incentivized participants in Onavo’s research program,” read another email.


The original article contains 671 words, the summary contains 175 words. Saved 74%. I’m a bot and I’m open source!


This is the best summary I could come up with:


For the last several months, a city at the heart of Silicon Valley has been training artificial intelligence to recognize tents and cars with people living inside in what experts believe is the first experiment of its kind in the United States.

Last July, San Jose issued an open invitation to technology companies to mount cameras on a municipal vehicle that began periodically driving through the city’s district 10 in December, collecting footage of the streets and public spaces.

There’s no set end date for the pilot phase of the project, Tawfik said in an interview, and as the models improve he believes the target objects could expand to include lost cats and dogs, parking violations and overgrown trees.

City documents state that, in addition to accuracy, one of the main metrics the AI systems will be assessed on is their ability to preserve the privacy of people captured on camera – for example, by blurring faces and license plates.

The group, made up of dozens of current and formerly unhoused people, has recently been fighting a policy proposed last August by the San Jose mayor, Matt Mahan, that would allow police to tow and impound lived-in vehicles near schools.

In addition to providing a training ground for new algorithms, San Jose’s position as a national leader on government procurement of technology means that its experiment with surveilling encampments could influence whether and how other cities adopt similar detection systems.


The original article contains 1,487 words, the summary contains 240 words. Saved 84%. I’m a bot and I’m open source!


This is the best summary I could come up with:


The probe will look at air carriers’ policies and procedures to determine if they are safeguarding personal info properly, unfairly or deceptively monetizing it, or sharing it with third parties, the agency said yesterday.

“Airline passengers should have confidence that their personal information is not being shared improperly with third parties or mishandled by employees,” said US Transportation Secretary Pete Buttigieg.

The ten airlines going under the magnifying glass are Delta, United, American, Southwest, Alaska, JetBlue, Spirit, Frontier, Hawaiian and Allegiant.

It won’t have escaped anyone’s notice, though, that airlines flying to and from the United States already are obliged to share airline passenger name records (PNR) with the US Department of Homeland Security (DHS) – including names, telephone and credit card numbers, and more, soon after they’ve booked a flight.

Besides looking into how commercial airlines deal with data, the US transport department is also looking at some other consumer rights, including a proposal to ban family seating junk fees and guarantee that parents can sit with their children for no extra charge when they fly, ruining the fun of TikTokers everywhere.

Another proposal in the works is to ensure “fee transparency” so that consumers know the cost of flying with a checked or carry-on bag and for changing or canceling a flight before they buy a ticket.


The original article contains 534 words, the summary contains 219 words. Saved 59%. I’m a bot and I’m open source!


This is the best summary I could come up with:


A lengthy investigation into the European Union’s use of Microsoft 365 has found the Commission breached the bloc’s data protection rules through its use of the cloud-based productivity software.

The EDPS has imposed corrective measures requiring the Commission to address the compliance problems it has identified by December 9, 2024, assuming it continues to use Microsoft’s cloud suite.

The regulator, which oversees EU institutions’ compliance with data protection rules, opened a probe of the Commission’s use of Microsoft 365 and other US cloud services back in May 2021.

When the EDPS opened the investigation there was also no data transfer agreement in place between the bloc and the US, following the striking down of the EU-US Privacy Shield in July 2020.

In a series of statements during a press briefing, the Commission expressed confidence that it complies with “the applicable data protection rules, both in fact and in law”.

“The same applies to all other software acquired by the Commission,” it went on, further noting: “New data protection rules for the EU institutions and bodies came into force on 11 December 2018.”


The original article contains 1,160 words, the summary contains 181 words. Saved 84%. I’m a bot and I’m open source!


This is the best summary I could come up with:


LexisNexis is a New York-based global data broker with a “Risk Solutions” division that caters to the auto insurance industry and has traditionally kept tabs on car accidents and tickets.

But “drivers are historically reluctant to participate in these programs,” as Ford Motor put it in a patent application that describes what is happening instead: Car companies are collecting information directly from internet-connected vehicles for use by the insurance industry.

In recent years, automakers, including G.M., Honda, Kia and Hyundai, have started offering optional features in their connected-car apps that rate people’s driving.

In a recent promotional campaign, an Instagram influencer used Smart Driver in a competition with her husband to find out who could collect the most digital badges, such as “brake genius” and “limit hero.”

Neither the car companies nor the data brokers deny that they are engaged in this practice, though automakers say the main purpose of their driver feedback programs is to help people develop safer driving habits.

The other automakers all have optional driver-coaching features in their apps — Kia, Mitsubishi and Hyundai have “Driving Score,” while Honda and Acura have “Driver Feedback” — that, when turned on, collect information about people’s mileage, speed, braking and acceleration that is then shared with LexisNexis or Verisk, the companies said in response to questions from The New York Times.


The original article contains 2,347 words, the summary contains 222 words. Saved 91%. I’m a bot and I’m open source!


This is the best summary I could come up with:


Last week, Sen. Ron Wyden (D-Ore.) wrote the Federal Trade Commission (FTC) urging the agency to “protect consumers and investors from the outrageous conduct” of Near, citing his office’s investigation into the India-based company.

Wyden’s investigation was spurred by a May 2023 Wall Street Journal report that Near had licensed location data to the anti-abortion group Veritas Society so it could target ads to visitors of Planned Parenthood clinics and attempt to dissuade women from seeking abortions.

Wyden’s investigation revealed that the group’s geofencing campaign focused on 600 Planned Parenthood clinics in 48 states.

The order demands that unless consumers have explicitly provided consent, the company must cease any collection, use, or transfer of location data.

Wyden wrote, “The threat posed by the sale of location data is clear, particularly to women who are seeking reproductive care.”

Near’s list of contracts included agreements with several location brokers, ad platforms, universities, retailers, and city governments.


The original article contains 683 words, the summary contains 154 words. Saved 77%. I’m a bot and I’m open source!


This is the best summary I could come up with:


The Reddit post sparked an investigation from a fourth-year student named River Stanley, who was writing for a university publication called MathNEWS.

While Cadillac Fairview was ultimately forced to delete its entire database, Stanley wrote that the consequences for Invenda clients like Mars of collecting similarly sensitive facial recognition data without consent remain unclear.

Stanley’s report ended with a call for students to demand that the university “bar facial recognition vending machines from campus.”

Some students claimed on Reddit that they attempted to cover the vending machine cameras while waiting for the school to respond, using gum or Post-it notes.

"The technology acts as a motion sensor that detects faces, so the machine knows when to activate the purchasing interface—never taking or storing images of customers."

It was only after closing a $7 million funding round, including deals with Mars and other major clients like Coca-Cola, that Invenda could push for expansive global growth that seemingly vastly expands its smart vending machines’ data collection and surveillance opportunities.


The original article contains 806 words, the summary contains 166 words. Saved 79%. I’m a bot and I’m open source!


This is the best summary I could come up with:


Avast, the cybersecurity software company, is facing a $16.5 million fine after it was caught storing and selling customer information without their consent.

The Federal Trade Commission (FTC) announced the fine on Thursday and said that it’s banning Avast from selling user data for advertising purposes.

From at least 2014 to 2020, Avast harvested user web browsing information through its antivirus software and browser extension, according to the FTC’s complaint.

“We are committed to our mission of protecting and empowering people’s digital lives,” Avast spokesperson Jess Monney said in a statement to The Verge.

“While we disagree with the FTC’s allegations and characterization of the facts, we are pleased to resolve this matter and look forward to continuing to serve our millions of customers around the world.”

In January, the FTC reached a settlement with Outlogic (formerly X-Mode Social) that prevents the data broker from selling information that can be used to track users’ locations.


The original article contains 398 words, the summary contains 155 words. Saved 61%. I’m a bot and I’m open source!


This is the best summary I could come up with:


On a recent Thursday morning in Queens, travelers streamed through the exterior doors of La Guardia Airport’s Terminal C. Some were bleary-eyed — most hefted briefcases — as they checked bags and made their way to the security screening lines.

This passenger screening, using facial recognition software and made available to select travelers at La Guardia by Delta Air Lines and the Transportation Security Administration, is just one example of how biometric technology, which uses an individual’s unique physical identifiers, like their face or their fingerprints, promises to transform the way we fly.

Time-consuming airport rituals like security screening, leaving your luggage at bag drop and even boarding a plane may soon only require your face, “helping to reduce waiting times and stress for travelers,” Mr. Harteveldt said.

Dr. Morgan Klaus Scheuerman, a postdoctoral researcher at the University of Colorado who studies the ethics of artificial intelligence and digital identity, said many questions have emerged about the use of biometrics at airports: How are the systems being trained and evaluated?

And Alaska Airlines plans to spend $2.5 billion over the next three years in upgrades, including new bag drop machines, in Seattle, Portland, Ore., San Francisco, Los Angeles and Anchorage.

On a recent afternoon, Brad Mossholder, 45, used Delta’s Digital ID line to breeze through the security screening at Terminal 4 and bypass a dozen travelers in the adjacent PreCheck lane.


The original article contains 2,022 words, the summary contains 231 words. Saved 89%. I’m a bot and I’m open source!


This is the best summary I could come up with:


Toyota has insisted it takes customer privacy “extremely seriously”, but has acknowledged the data communication module (DCM) – known as the “Connected Services” feature – can be disabled but not removed from its cars; removing it could void the warranty and render Bluetooth and speakers non-functional.

Following an investigation, Choice has found Toyota’s “Connected Services” feature “collects information such as vehicle location, driving data, fuel levels, and even phone numbers and email addresses”.

A Choice investigation found one customer, Matthew, claimed he only learned about the Connected Services feature a few months after buying his $68,000 Toyota HiLux when he began receiving emails asking him to register for it.

Feeling uncomfortable about the feature, the Queensland father asked the dealership to remove – not just deactivate – the technology from his car, but claimed he was told this would void the warranty and risk his insurance.

He called on the federal government to bolster safeguards and introduce prohibitions on the collection and use of personal data as a matter of urgency.

The spokesperson said that while disconnecting the sim card would not void the warranty, a customer who elected to physically remove the DCM with a third party – because Toyota won’t – “does so at their own risk”.


The original article contains 507 words, the summary contains 211 words. Saved 58%. I’m a bot and I’m open source!


This is the best summary I could come up with:


That’s the PSA (of sorts) today from Google, which in a new support document outlines the ways in which it collects data from users of its Gemini chatbot apps for the web, Android and iOS.

Switching off Gemini Apps Activity in Google’s My Activity dashboard (it’s enabled by default) prevents future conversations with Gemini from being saved to a Google Account for review (meaning the three-year window won’t apply).

To be fair, Google’s GenAI data collection and retention policies don’t differ all that much from those of its rivals.

But Google’s policy illustrates the challenges inherent in balancing privacy with developing GenAI models that feed on user data to self-improve.

Liberal GenAI data retention policies have landed vendors in hot water with regulators in the recent past.

OpenAI, Microsoft, Amazon, Google and others offer GenAI products geared toward enterprises that explicitly don’t retain data for any length of time, whether for model training or any other purpose.


The original article contains 536 words, the summary contains 157 words. Saved 71%. I’m a bot and I’m open source!


This is the best summary I could come up with:


But privacy, immigration and digital liberties experts are also concerned over another aspect of the bill: more than $400m in funding for additional border surveillance and data-gathering tools.

The lion’s share of that funding will go to two main tools: $170m for additional autonomous surveillance towers and $204m for “expenses related to the analysis of DNA samples”, which includes those collected from migrants detained by border patrol, according to the text of the bill.

The bill also includes $25m in funding for “subterranean detection capabilities” and $10m to acquire data from unmanned surface vehicles or autonomous boats “in support of maritime border security”.

The US has already spent hundreds of millions of dollars on these automated surveillance towers, which are primarily made by Anduril Industries – the brainchild of Palmer Luckey, founder of Oculus VR.

“Rather than solving immigration and border issues, this allocation is a windfall for surveillance tech vendors,” said Saira Hussain, senior staff attorney at EFF.

“It’s evident that they are presenting a sense of inevitability that technology will dictate the course of your life in the United States, whether it’s by serving as the ‘soft’ enforcer at the border or through the surveillance that will follow you into the country,” said Shah.


The original article contains 570 words, the summary contains 206 words. Saved 64%. I’m a bot and I’m open source!


This is the best summary I could come up with:


The data is unnecessary for processing notifications, the researchers said, and seems related to analytics, advertising, and tracking users across different apps and devices.

It’s par for the course that apps would find opportunities to sneak in more data collection, but “we were surprised to learn that this practice is widely used,” said Tommy Mysk, who conducted the tests along with Talal Haj Bakry.

For one, Apple gives app developers details about what’s going on with notifications directly, so there’s no need to collect additional information if you know what happened after you pinged your users.

Furthermore, a lot of the data that apps are collecting seems unrelated to analyzing how well notifications are working, like your phone’s available disk space or the time since your last reboot, Mysk said.

Mysk said if a company like Google can send you a notification without snooping on other details, that suggests there are ulterior motives for the data collection he spotted.

Unfortunately, you might have heard that big companies sometimes tell lies, which would get in the way of that solution, and Apple doesn’t have a stellar track record of enforcing similar rules.


The original article contains 1,384 words, the summary contains 191 words. Saved 86%. I’m a bot and I’m open source!


This is the best summary I could come up with:


A 61-year-old man is suing Macy’s and the parent company of Sunglass Hut over the stores’ alleged use of a facial recognition system that misidentified him as the culprit behind an armed robbery and led to his wrongful arrest.

Harvey Eugene Murphy Jr was accused and arrested on charges of robbing a Houston-area Sunglass Hut of thousands of dollars of merchandise in January 2022, though his attorneys say he was living in California at the time of the robbery.

Dutko said he discovered from police documents that the Sunglass Hut worker shared camera footage with Macy’s, which employees from the department store chain used to identify Murphy.

Just last month, Rite Aid settled with the Federal Trade Commission over its use of a facial recognition system that misidentified Black, Latino and Asian customers as people previously identified as “likely to engage” in shoplifting.

And in the summer of 2023, a woman named Porcha Woodruff was arrested on charges of carjacking due to false identification by a facial recognition system.

In a 2020 lawsuit, a Chicago woman accused the company of working with facial recognition provider Clearview AI without her or other customers’ consent in violation of Illinois’ biometric privacy law.


The original article contains 987 words, the summary contains 201 words. Saved 80%. I’m a bot and I’m open source!


This is the best summary I could come up with:


All of the big pharmacy chains in the US hand over sensitive medical records to law enforcement without a warrant—and some will do so without even running the requests by a legal professional, according to a congressional investigation.

Lawmakers noted the pharmacies’ policies for releasing medical records in a letter dated Tuesday to the Department of Health and Human Services (HHS) Secretary Xavier Becerra.

They include the seven largest pharmacy chains in the country: CVS Health, Walgreens Boots Alliance, Cigna, Optum Rx, Walmart Stores, Inc., The Kroger Company, and Rite Aid Corporation.

The rest of the pharmacies—Amazon, Cigna, Optum Rx, Walmart, and Walgreens Boots Alliance—at least require that law enforcement requests be reviewed by legal professionals before pharmacists respond.

“We urge HHS to consider further strengthening its HIPAA regulations to more closely align them with Americans’ reasonable expectations of privacy and Constitutional principles,” the three lawmakers wrote.

“Last year, CVS Health, the largest pharmacy in the nation by total prescription revenue, only received a single-digit number of such consumer requests,” the lawmakers noted.


The original article contains 714 words, the summary contains 173 words. Saved 76%. I’m a bot and I’m open source!


This is the best summary I could come up with:


However, experts believe the UAE has one of the highest per capita concentrations of such cameras on Earth — allowing authorities to potentially track a visitor throughout their trip to a country without the civil liberty protections of Western nations.

Marta Schaaf, Amnesty International’s director of climate, economic and social justice and corporate accountability, told the AP the seemingly omnipresent surveillance in the UAE created an “environment of fear and tension”.

Then, Dubai police quickly pieced together footage showing three-dozen suspected Israeli Mossad intelligence service operatives, some dressed as tennis players, who assassinated Hamas commander Mahmoud al-Mabhouh at a luxury hotel.

In late 2016, Dubai police partnered with an affiliate of the Abu Dhabi-based firm DarkMatter to use its Pegasus “big data” application to pool hours of surveillance video to track anyone in the emirate.

G42 also partnered during the pandemic with Chinese firm BGI Group, the world’s largest genetic sequencing company, which expanded its reach during the crisis and sought to offer services to Nevada in the US.

Xiao told The Financial Times this week his firm would cut ties to Chinese hardware suppliers over concerns from U.S. partners like Microsoft and OpenAI as it ramps up its artificial intelligence business.


The original article contains 1,123 words, the summary contains 205 words. Saved 82%. I’m a bot and I’m open source!


This is the best summary I could come up with:


And like many technologically engaged Ars Technica readers, he does not like what he sees in terms of automakers’ approach to data privacy.

On Friday, Sen. Markey wrote to 14 car companies with a variety of questions about data privacy policies, urging them to do better.

The problems were widespread—most automakers collect too much personal data and are too eager to sell or share it with third parties, the foundation found.

Markey noted the Mozilla Foundation report in his letters, which were sent to BMW, Ford, General Motors, Honda, Hyundai, Kia, Mazda, Mercedes-Benz, Nissan, Stellantis, Subaru, Tesla, Toyota, and Volkswagen.

The senator is concerned about the large amounts of data that modern cars can collect, including the troubling potential to use biometric data (like the rate a driver blinks and breathes, as well as their pulse) to infer mood or mental health.

"Although certain data collection and sharing practices may have real benefits, consumers should not be subject to a massive data collection apparatus, with any disclosures hidden in pages-long privacy policies filled with legalese.


The original article contains 282 words, the summary contains 175 words. Saved 38%. I’m a bot and I’m open source!


This is the best summary I could come up with:


Shayano Madzikanda was suspended from his job at the mining industry company Mecrus in June 2019 and was ordered to surrender his work laptop.

In a complaint to the information commissioner made in 2019, he alleged that his iCloud and personal email accounts had been accessed by his employer.

But Madzikanda claimed his employer could only have known that by reading the contents of his personal emails and accessing information from his iCloud account.

Separately, he settled with his employer through the Fair Work Commission, including a provision that his personal property be returned.

The company denied it had used personal information saved on the laptop to access his online accounts, and provided IT policies dating back to 2013.

David Vaile, the privacy and surveillance stream lead at the University of New South Wales’s Allens Hub for Technology, Law and Innovation, said: “The judgment is [unhelpful] for settling the law on this point – a consequence of the fact that a victim can’t directly litigate their legal claim, and that, as the court confirms, at present Australians still thus don’t have a ‘right’ to privacy, only a right to complain to a regulator who can, as this judgment confirms, take advantage of a wide range of justifications to do nothing if they feel like it with minimal court oversight.”


The original article contains 768 words, the summary contains 219 words. Saved 71%. I’m a bot and I’m open source!


This is the best summary I could come up with:


Cameron Ortis, the former RCMP intelligence official on trial in Ottawa, says he was tipped off by a counterpart at a “foreign agency” that the people he’s accused of leaking secrets to had “moles” inside Canadian police services.

“I had sensitive information from multiple sources that each of the subjects had compromised or penetrated Canadian law enforcement agencies,” Ortis testified last week.

The testimony is contained in redacted transcripts released Friday evening, more than a week after the former civilian member began testifying in his defence during his unprecedented trial.

The Crown alleges Ortis used his position as the head of a highly secret unit within the RCMP to attempt to sell intelligence gathered by Canada and its Five Eyes allies to individuals linked to the criminal underworld.

Ortis is accused of sharing information in 2015 with Ramos, the head of Phantom Secure, a Canadian company that made encrypted devices for criminals.

Under cross-examination, Crown prosecutor John MacFarlane asked why Ortis didn’t approach one of the Five Eyes partners to discuss his plans with them “just generally.”


The original article contains 956 words, the summary contains 175 words. Saved 82%. I’m a bot and I’m open source!


This is the best summary I could come up with:


Privacy consultant Alexander Hanff, who has occasionally contributed to The Register, has challenged Meta’s collection of data without explicit consent under Ireland’s computer abuse law.

“I have notified Pearse Street Garda that I want to give a statement to them for the purpose of the criminal complaint and will be sending them additional information over the weekend,” Hanff told us last night.

Two weeks ago, Hanff filed a civil complaint to the Irish Data Protection Commission against YouTube’s browser interrogation system, which detects ad blocking software and refuses to play videos unless adverts are allowed or subscription money handed over.

“Meta Platforms Ireland Ltd for a period of not less than five years from May 25, 2018 to present, illegally deployed surveillance technology to my computers for the purpose of monitoring my behavior, as they had no reasonable excuse or lawful authority to do so,” Hanff alleged to The Register.

“Regulators have let us down and are absolutely (in my opinion) partly responsible for the erosion of our fundamental rights and the expansion of these illegal behaviors, by failing to do their jobs and take strong enforcement action against violators,” he claimed.

“As a result, it is now considered as the normal way to conduct online business, which is an incredibly bad reflection of the regulators and has significantly eroded trust of the public that their complaints will ever be dealt with at all – let alone in a meaningful way.”


The original article contains 1,285 words, the summary contains 241 words. Saved 81%. I’m a bot and I’m open source!


This is the best summary I could come up with:


In response to five class-action lawsuits, a Washington appeals court has decided that Honda and several other automakers did nothing wrong by storing text messages and call records from connected smartphones.

Honda, Toyota, Volkswagen, and General Motors all faced claims in separate but related class-action suits alleging they violated Washington state privacy laws.

“To succeed at the pleading stage of a WPA claim, a plaintiff must allege an injury to ‘his or her business, his or her person, or his or her reputation,’” the judges ruled.

In other words, it’s A-OK for your car to “automatically and without authorization, instantaneously intercept, record, download, store, and [be] capable of transmitting” text messages and call logs since the privacy violation is potential, but the injury not necessarily actual.

Per the first amended complaint [PDF] filed in the Honda case, Honda infotainment systems in vehicles manufactured from 2014 onward “store each intercepted, recorded, and downloaded copy of text messages in non-temporary computer memory in such a manner that the vehicle owner cannot access it or delete it,” plaintiffs argued.

Plaintiffs accusing Honda of WPA violations pointed to Maryland-based Berla Corporation, which manufactures equipment “capable of extracting stored text messages from infotainment systems” as a reason for owners to consider the data harvesting a privacy concern.


The original article contains 532 words, the summary contains 215 words. Saved 60%. I’m a bot and I’m open source!


This is the best summary I could come up with:


WASHINGTON, Nov 7 (Reuters) - A bipartisan team of U.S. lawmakers has introduced new legislation intended to curb the FBI’s sweeping surveillance powers, saying the bill helps close the loopholes that allow officials to seize Americans’ data without a warrant.

Reforms in the proposed legislation include putting limits on searches of Americans’ communications without judicial authorization and a prohibition of so-called “backdoor” searches which invoke foreign intelligence justifications to spy on Americans.

The White House and the FBI did not immediately return messages seeking comment, although executive branch officials have long insisted that the surveillance power - which expires at the end of the year - is a critical tool for fighting foreign espionage and terrorism and have lobbied for its reauthorization.

The reforms introduced Tuesday reflect discomfort over the practice of warrantless scans, which are authorized under Section 702 of the Foreign Intelligence Surveillance Act.

Its opponents were galvanized when the Office of the Director of National Intelligence revealed in July that the FBI had improperly conducted searches for information about a U.S. senator and two state officials.

“When the FBI snoops on the American people without a warrant, it’s not a blunder, it’s a breach of trust and it’s a violation of the Constitution,” Republican Senator Mike Lee told reporters.


The original article contains 320 words, the summary contains 211 words. Saved 34%. I’m a bot and I’m open source!


This is the best summary I could come up with:


America’s immigration cops have pushed back against an official probe that concluded their lax mobile device security potentially put sensitive government information at risk of being stolen by foreign snoops.

Between April 27 and August 17, the US Department of Homeland Security Office of the Inspector General conducted an audit of equipment managed by Immigration and Customs Enforcement (ICE) and the agency’s IT policies.

This included third-party file sharing services and virtual private networks (VPN), outdated messaging platforms, and apps developed by companies banned from US government IT systems.

To be fair, some of the ICE-approved apps sound equally concerning, such as “one ICE-owned application allows ICE personnel to capture and search biometric information of people they encounter in real-time.”

Additionally, DHS, in its response to the audit, disagreed that ICE security controls did not reduce the risk to federal mobile devices and their sensitive information.

Homeland Security also claimed the percentage of ICE-managed devices that did not have mobile threat defense capability installed is significantly lower than the inspector general’s audit number.


The original article contains 612 words, the summary contains 173 words. Saved 72%. I’m a bot and I’m open source!


This is the best summary I could come up with:


Facial recognition could analyse a blown-up still taken from a security tape, sift through a database of millions of driver licence photos, and identify the person who did the crime.

Months later, the facial recognition system used by Detroit police combed through its database of millions of driver licences to identify the criminal in the grainy security tapes.

By January 2020, as Mr Williams had his mug shot taken in the Detroit detention centre, civil liberties groups knew that black people were being falsely accused due to this technology.

It would give law enforcement and security agencies quick access to up to 100 million facial images from databases around Australia, including driver licences and passport photos.

That didn’t stop the then government from ploughing ahead with its planned national facial recognition system, says Edward Santow, an expert on responsible AI at the University of Technology Sydney, and the Australian Human Rights Commissioner at the time.

Despite this, last month Senate estimates heard that the federal police tested a second commercial one-to-many face matching service, PimEyes, earlier this year.


The original article contains 1,870 words, the summary contains 162 words. Saved 91%. I’m a bot and I’m open source!


This is the best summary I could come up with:


(tldr: 2 sentences skipped)

He also is encouraging police to operate live facial recognition (LFR) cameras more widely, before a global artificial intelligence (AI) safety summit next week at Bletchley Park in Buckinghamshire.

(tldr: 3 sentences skipped)

The campaign group Big Brother Watch has described the deployment of the technology by the police as “dangerous authoritarian surveillance” and warned that it is a “serious threat to civil liberties in the UK”.

(tldr: 3 sentences skipped)

In response to the plans, a cross-party group of MPs and peers this month also called for an “immediate stop” to the use of live facial recognition surveillance by police and private companies.

(tldr: 2 sentences skipped)

Former Brexit secretary David Davis, Liberal Democrat leader Sir Ed Davey, Green MP Caroline Lucas and former Labour shadow attorney general Shami Chakrabarti were among 65 members of the Commons and Lords who backed a call for a halt to its deployment.

(tldr: 3 sentences skipped)

The Home Office rejects such concerns, with officials saying that facial recognition camera use is strictly governed by data protection, equality and human rights laws, and can only be used for a policing purpose where it is necessary and proportionate.

The department says AI surveillance methods such as facial recognition can help police accurately identify those wanted for serious crimes, as well as assist in finding missing people.

(tldr: 6 sentences skipped)


The original article contains 640 words, the summary contains 229 words. Saved 64%. I’m a bot and I’m open source!


This is the best summary I could come up with:


(tldr: 5 sentences skipped)

Some of those in attendance saw a demo of Fusus — a paid service that makes it easier for police to access privately owned security camera footage from residents and businesses.

(tldr: 12 sentences skipped)

The cameras, Barth said, are a “time-saver” for lower-priority calls like property crimes and make it easy for police to give video to lawyers requesting footage of car crashes.

(tldr: 6 sentences skipped)

She points to Clearview AI, a controversial facial recognition tool Canadian police services secretly used until privacy watchdogs ordered them to stop.

Tusikov said Fusus would be a “disproportionate response” to crimes like auto theft, which has been surging in Canada, and likely wouldn’t help with intimate partner violence, which has been declared an epidemic in Hamilton and other cities across the country.

(tldr: 17 sentences skipped)

CBC contacted Canadian police services at the Real Time Crime Center Operations and Tech Integration conference, asking if any of them use Fusus or are exploring using it or similar technology.

(tldr: 9 sentences skipped)

“We would especially encourage this given that Fusus appears to involve real-time monitoring and unmediated access to private surveillance cameras which may come with a greater risk of intrusion into the privacy of individuals,” the IPC said.


The original article contains 1,187 words, the summary contains 208 words. Saved 82%. I’m a bot and I’m open source!


This is the best summary I could come up with:


Interview: Last week, privacy advocate (and very occasional Reg columnist) Alexander Hanff filed a complaint with the Irish Data Protection Commission (DPC) decrying YouTube’s deployment of JavaScript code to detect the use of ad blocking extensions by website visitors.

YouTube’s open hostility to ad blockers coincides with the recent trial deployment of a popup notice presented to web users who visit the site with an ad-blocking extension in their browser – messaging tested on a limited audience at least as far back as May.

“In early 2016 I wrote to the European Commission requesting a formal legal clarification over the application of Article 5(3) of the ePrivacy Directive (2002/58/EC) and whether or not consent would be required for all access to or storage of information on an end user’s device which was not strictly necessary,” Hanff told The Register.

"Specifically whether the deployment of scripts or other technologies to detect an ad blocker would require consent (as it is not strictly necessary for the provision of the requested service and is purely for the interests of the publisher).

Hanff disagrees, and maintains that "The Commission and the legislators have been very clear that any access to a user’s terminal equipment which is not strictly necessary for the provision of a requested service, requires consent."

“This is also bound by CJEU Case C-673/17 (Planet49) from October 2019 which all Member States are legally obligated to comply with, under the [Treaty on the Functioning of the European Union] – there is no room for deviation on this issue,” he elaborated.
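For context on what an ad-blocker-detection script actually does, here is a generic, hypothetical sketch of one common technique (a "bait" element check). It is not YouTube's code, and the class names are simply examples of selectors that filter lists tend to hide:

```ts
// Insert a bait element styled like an ad slot, then check whether the
// user's content blocker hid or removed it via its cosmetic filters.
function adBlockerDetected(): boolean {
  const bait = document.createElement("div");
  bait.className = "adsbox ad-banner text-ad"; // example classes commonly targeted by filter lists
  bait.style.height = "1px";
  document.body.appendChild(bait);

  // A blocker's element-hiding rules typically set display:none or collapse
  // the element, so a zero height after insertion is treated as detection.
  const blocked =
    bait.offsetHeight === 0 || getComputedStyle(bait).display === "none";

  bait.remove();
  return blocked;
}

// In practice a site might gate playback or show a nag notice on detection.
console.log("ad blocker present:", adBlockerDetected());
```

The point Hanff is making is that a probe like this reads the state of the user's browser purely for the publisher's benefit, which in his reading of Article 5(3) makes it consent-requiring rather than "strictly necessary".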


The original article contains 1,030 words, the summary contains 258 words. Saved 75%. I’m a bot and I’m open source!


This is the best summary I could come up with:


A recently released video deposition in a long-running lawsuit over Google tracking its users claims that even Google CEO Sundar Pichai isn’t clear on what’s going on below him.

Jonathan Hochman, an expert witness for the plaintiffs, provided a technical analysis of Google data collection, but his report remains under seal.

Hochman contends in his deposition that even Google insiders, including Alphabet CEO Sundar Pichai, misunderstand the Web & App Activity (WAA) control.

This is spelled out more explicitly in a more recent court filing [PDF]: "For example, Google CEO Sundar Pichai testified to Congress that, within ‘My Account’ user can ‘clearly see what information is collected, stored.’

“The situation I found upon the technical investigation was counterintuitive, it was not what I expected to find, and it is, frankly, kind of Orwellian, it is just very strange that you have a privacy switch that when you flip it, it just means we don’t tell you that we’re spying on you,” he said.

Last year, the judge hearing the case dismissed [PDF] claims that alleged violation of the California Invasion of Privacy Act and breach of contract.


The original article contains 1,263 words, the summary contains 183 words. Saved 86%. I’m a bot and I’m open source!


This is the best summary I could come up with:


US Immigration and Customs Enforcement (ICE) has used an AI-powered data-scanning tool called Giant Oak Search Technology (GOST) to scour social media looking for posts containing “derogatory” comments about the nation.

Immigration agencies and service providers have apparently been using the data in enforcement actions, according to an American Civil Liberties Union lawsuit filed under the Freedom of Information Act (FOIA), and first reported by 404 Media.

“The Biden administration has been quietly deploying and expanding programs that surveil what people say on social media, often without any suspicion whatsoever,” Shaiba Rather, a Nadine Strossen Fellow with ACLU’s National Security Project, told The Register.

“These programs chill people from speaking freely online and transform social media into a platform for constant government scrutiny.”

The firm says its AI-based system allows government agencies and law enforcement to “identify bad actors by behavioral pattern rather than identity labels,” using information found on the open and deep web.

DHS has reportedly used GOST since 2014, according to documents obtained by 404 Media, and ICE has paid Giant Oak more than $10 million for the system since 2017.


The original article contains 557 words, the summary contains 186 words. Saved 67%. I’m a bot and I’m open source!