A misconfigured Amazon Web Services S3 storage bucket exposed sensitive data on consumers' financial histories, contact information, and mortgage ownership.
A major data leak resulting from yet another misconfigured Amazon Web Services S3 storage bucket has exposed sensitive information of 123 million American households. The cloud repository included data from analytics firm Alteryx, reports the UpGuard Cyber Risk Team.
Also exposed were massive data sets belonging to Alteryx partners Experian, the consumer credit reporting agency, and the US Census Bureau. Information from Experian's ConsumerView marketing database and the 2010 US Census were leaked. Home addresses, contact information, financial histories, and analyses of purchasing behavior were publicly available.
UpGuard's director of cyber risk research, Chris Vickery, found the AWS S3 bucket at the subdomain "alteryxdownload" containing sensitive data. The repository was configured to allow any AWS "Authenticated Users" to download its data, meaning anyone with a free Amazon AWS account could access the bucket's information.
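For context, the "Authenticated Users" grantee in an S3 access control list is a global group that includes anyone with an AWS account, not just users in the bucket owner's account. As a rough illustration (not part of UpGuard's analysis), here is a minimal Python sketch using boto3 that flags that kind of grant on a bucket you own; the bucket name and credentials are placeholders:

```python
# Sketch: audit an S3 bucket ACL for the overly broad "Authenticated Users" /
# "All Users" grants described above. Assumes boto3 is installed and AWS
# credentials are configured; "example-bucket" is a placeholder name.
import boto3

RISKY_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",  # any AWS account holder
    "http://acs.amazonaws.com/groups/global/AllUsers",            # anyone on the internet
}

def audit_bucket_acl(bucket_name: str) -> list:
    """Return ACL grants that expose the bucket to overly broad groups."""
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    return [
        grant for grant in acl["Grants"]
        if grant["Grantee"].get("Type") == "Group"
        and grant["Grantee"].get("URI") in RISKY_GROUPS
    ]

if __name__ == "__main__":
    for grant in audit_bucket_acl("example-bucket"):
        print(f"Overly broad grant: {grant['Permission']} -> {grant['Grantee']['URI']}")
```

AWS also offers account-level and bucket-level Block Public Access settings, which are a simpler guard against this class of misconfiguration.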
"Taken together, this exposed data provide a highly detailed database of tens of millions of Americans' personal, financial, and private lives," UpGuard says. This leak is a "prime example" of how third-party vendor risk can lead to sensitive data exposure.
Each December, my family and I face a new Tetris-like set of decisions about what games and gadgets the kids are requesting, what we feel is age-appropriate, and what falls within our budget. And increasingly, as more of the devices marketed to our family include embedded cameras, microphones, and internet connectivity, we also find ourselves weighing the privacy trade-offs of each connected toy and device we are considering bringing into our home. As you can imagine, it’s a real bummer for our kids. When Mom says “no” to adding the latest connected robot to the Christmas list, it feels like just the latest in a long line of roadblocks to fun. “What’s the big deal anyway, Mom?”
The big deal, of course, is that our family’s personal data is increasingly collected by default and only protected through considerable effort. In a big data-driven economy where decisions about educational or employment opportunities for our kids can be influenced by social media data or other publicly available digital footprints, there’s a lot at stake. Without strong security protections or the ability to choose privacy settings that allow us to control when a camera is turned on or off, how long data is stored or who can access the data once the device is connected to the internet, the features of many new toys start to look more like bugs.
And although our kids often assume that our sensitivities stem from my professional experience as a researcher who studies privacy, our concerns are hardly unique. In fact, across a wide range of studies, including the most recent survey by the National Cyber Security Alliance (NCSA), both parents and teens consistently rank privacy-related risks among their top online safety concerns. Parents and teens often have different threat models in mind (with teens more concerned about maintaining privacy from the adults in their lives), but the basic desire for control over personal data is the same and can be a helpful starting point for conversations about family members’ shared responsibility for online safety.
Thankfully, during this busy season, there are a variety of resources to turn to when weighing whether or not a gift meets your family’s privacy checklist. In addition to the many excellent tip sheets on the NCSA website (including one devoted entirely to connected home devices), companies like Mozilla have created a holiday buyer’s guide for connected toys and gifts that makes it easy to view basic features at a glance for some of this year’s most popular toys. At the same time, even the most privacy-attentive gift givers can be on the receiving end of gifts from grandparents or friends that may introduce more connectivity or surveillance capabilities than their families are comfortable with. In those instances, when we might worry about offending someone who made a generous gesture, it can be helpful to remember that while gifts can easily be returned, there’s no easy way to return personal data once it’s been shared online.
Facebook just loosened the leash a little on its facial-recognition algorithms. Starting Tuesday, any time someone uploads a photo that includes what Facebook thinks is your face, you’ll be notified even if you weren’t tagged.
The new feature rolled out to most of Facebook’s more than 2 billion global users this morning. It applies only to newly posted photos, and only those with privacy settings that make an image visible to you. Facebook users in Canada and the European Union are excluded. The social network doesn’t use facial-recognition technology in those regions, due to wariness from privacy regulators.
Facebook has steadily expanded its use of facial recognition over the years. The company first offered the technology to users in late 2010, with a feature that suggests people to tag in photos. Backlash against the way users were automatically opted into that system is one reason Facebook’s algorithms are face-blind in Canada and the EU today. Elsewhere, the company made new efforts to notify users but left the feature essentially unchanged. In 2015, the company launched a photo-organization app called Moments that uses facial recognition to help you share photos with people in your snaps.
Facebook’s head of privacy, Rob Sherman, positions the new photo-notification feature as giving people more control over their image online. “We’ve thought about this as a really empowering feature,” he says. “There may be photos that exist that you don’t know about.” Informing you of their existence is also good for Facebook: more notifications flying around means more activity from users and more ad impressions. More people tagging themselves in photos adds more data to Facebook’s cache, helping to power the lucrative ad-targeting business that keeps the company afloat.
Once Facebook identifies you in a photo, it will display a notification that leads to a new Photo Review dialog. There you can choose to tag yourself in the image, message the user who posted an image, inform Facebook that the face isn’t you, or report an image for breaching the site’s rules.
As part of the new feature, Facebook will also notify users if someone else attempts to use their photo in a profile; Facebook says it’s trying to make it harder to impersonate other people. The company is also adding facial recognition to its service for visually impaired people that describes photos from friends in text.
How good is Facebook’s facial-recognition technology? Among the best in the world. The hundreds of billions of photos stored on the company’s servers provide ample data to train machine-learning algorithms to distinguish different faces. Nipun Mathur, of Facebook’s applied-machine-learning group, declines to provide any figures on the system’s accuracy. He says the system works even if it doesn’t have a full view of your face, although it can’t recognize people in a 90-degree profile. In 2015, Facebook’s AI research group published a paper on a system that could recognize people even when their faces are not visible, using other cues such as clothing or body shape. Facebook says nothing from that work is in the new product.
If you don’t like the sound of all that, you may want to take advantage of a revamped privacy control Facebook also launched Tuesday. You could already opt out of Facebook’s facial-recognition-powered photo tag suggestions, but the setting’s description delicately avoided the term facial recognition. A new version of the setting that allows you to turn off facial recognition altogether does use the phrase, perhaps making it easier for people to understand what they’re already allowing. If you opt out of facial recognition, Facebook says it will delete the face template used to find you in photos.
Some privacy advocates say the system should require users to opt in, rather than force them to opt out. In 2015, nine organizations walked out of a Department of Commerce process intended to develop a code of conduct for commercial use of facial recognition, including at social-media companies. Jennifer Lynch, a senior staff attorney with the Electronic Frontier Foundation, says companies’ refusal to make their technology opt-in was one reason she and others abandoned the process.
Lynch argues that Facebook’s current policy prevents people from being able to make decisions about privacy and risks to their personal data. The company can instantly and silently roll out sweeping new uses for face data that affect over a billion people.
Lynch says there’s a lot of interest from retailers in using face recognition to track and target shoppers in stores, an area of business Facebook might conceivably be tempted by. A recently disclosed patent application envisions Facebook deploying face recognition for in-store payments. The social network already works with data brokers to link Facebook users’ online activity and profiles with offline behavior.
A Facebook spokesman said the company has no plans for facial-recognition products beyond the one announced Tuesday, and that the company often patents ideas never put into practice. He didn't answer a query about why Facebook didn't allow users to opt in to facial recognition.
Facebook’s stance on that may be tested in court before long. The company is fighting a suit in federal court brought by a user who says the company’s opt-out approach to facial recognition breaches an Illinois privacy law.
Trojanized Android apps continue to evolve rapidly and to target users. A new malware strain, Trojan.AndroidOS.Loapi, has a modular architecture capable of carrying out several different kinds of attacks.
Security researchers from Kaspersky Lab discovered the trojan, dubbed “Loapi,” which can physically damage a phone: its Monero mining module generates such a constant load that the battery bulges and deforms the phone’s cover.
How Loapi Is Distributed
Loapi has not reached the Play Store; it is distributed through advertising campaigns, hiding behind fake antivirus and adult-content apps. Researchers found more than 20 sources distributing Loapi. Users are redirected to the attackers’ malicious websites, and the file is downloaded from there.
Once installed, the app checks whether the device is rooted but does not use root privileges; it then attempts to obtain device administrator permissions.
Execution and Self-Protection
If Loapi obtains admin permissions, it performs various activities to protect itself: it prevents users from revoking its device administrator rights through standard means, and it pressures users into uninstalling legitimate antivirus apps by displaying an endless stream of popups.
Execution proceeds in stages: first the malicious app file is installed; in the second stage, a DEX payload is downloaded that sends device information to the C&C servers; in the third stage, the modules are downloaded and initialized.
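For readers who want to check whether an app has quietly obtained device administrator rights of the kind Loapi requests, one quick audit is to dump the device policy state over adb. The following Python sketch is illustrative only; it assumes adb from the Android platform tools is installed and a device is connected with USB debugging enabled, and the dumpsys output format can vary between Android versions:

```python
# Sketch: list device administrator components on a connected Android device.
# Assumes the Android platform tools (adb) are installed and USB debugging is
# enabled on the device; parsing is intentionally loose.
import subprocess

def list_device_admins() -> list:
    """Return lines from `dumpsys device_policy` that name admin components."""
    output = subprocess.run(
        ["adb", "shell", "dumpsys", "device_policy"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Active admins are reported as ComponentInfo{package/ReceiverClass} entries.
    return [line.strip() for line in output.splitlines() if "ComponentInfo{" in line]

if __name__ == "__main__":
    for admin in list_device_admins():
        print(admin)
```

Any unfamiliar package appearing in that list is worth investigating before granting it, or leaving it with, administrator rights.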
Modules Installed
Advertisement module: used for the aggressive display of ads.
SMS module: used to send requests to the C&C server.
Web crawling module: used for hidden JavaScript execution.
Proxy module: implements an HTTP proxy server that can be used to mount DDoS attacks.
Monero mining module: used to perform Monero (XMR) cryptocurrency mining.
Researchers found connections between Loapi and Trojan.AndroidOS.Podec: the two share similar obfuscation techniques, functionality, and methods of detecting root permissions on the device.
19 Million California Voter Records Held for Ransom in MongoDB Attack. The records were first exposed in an unsecured MongoDB database, continuing a cyber-extortion trend.
Voter registration data for over 19.2 million California residents that was residing on an unsecured MongoDB database has been deleted and held for ransom by attackers, according to researchers at Kromtech, who discovered the incident.
This continues a series of cyber-extortion attacks that exploit the MongoDB database management system. Similar to others, in this instance, the attacker scanned the internet for unsecured MongoDB databases, found the one containing the voter data, wiped the data and left a ransom request for 0.2 Bitcoin (around $3,500 US today), Bleeping Computer reports.
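The underlying misconfiguration in these incidents is a MongoDB instance that accepts connections from the internet without authentication. As an illustration of how an administrator might self-check a server they own (the host name below is a placeholder, and this should never be pointed at systems you don't control), here is a short Python sketch using pymongo:

```python
# Sketch: check whether a MongoDB server accepts unauthenticated connections,
# the misconfiguration exploited in attacks like this one. Run only against
# servers you own; "db.example.internal" is a placeholder host.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def is_open_to_anonymous(host: str, port: int = 27017) -> bool:
    """Return True if database names can be listed without credentials."""
    client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
    try:
        names = client.list_database_names()  # raises OperationFailure if auth is enforced
        print(f"{host}:{port} is OPEN, databases: {names}")
        return True
    except OperationFailure:
        print(f"{host}:{port} requires authentication")
        return False
    except ServerSelectionTimeoutError:
        print(f"{host}:{port} is unreachable")
        return False

if __name__ == "__main__":
    is_open_to_anonymous("db.example.internal")
```

An open result here gives an attacker's scanner the same access it gave this script: full read and write access to every database on the server.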
The Kromtech researchers state they have not been able to identify the owner of the database. They "believe that this could have been a political action committee or a specific campaign based on the unofficial title of the repository ('cool_db'), but this is only a suspicion."
Specific details on the attack are available in the researchers' full report.