There's a seemingly never-ending stream of incidents in which data stored in the cloud turns out to have been exposed[1] to the open internet for weeks. Or months. Or years. These leaks aren't necessarily the result of targeted attacks or breaches, but they are dangerous exposures that stem from small setup mistakes. Maybe sensitive information wound up in a cloud repository where it didn't belong. Or data was stored without authentication controls, so anyone could access it. Or someone never changed a default password. Now, as part of a broader slew of cloud security announcements, Google Cloud Platform will offer a potential solution to the chronic problem of misconfigured cloud buckets.

The stakes are high. Data exposures stemming from misconfigurations endanger millions of records, and the gaffes don't discriminate—any data can end up at risk. In just one memorable incident last year, a political analytics firm called Deep Root accidentally leaked personal information for 198 million United States voters[2], including names, addresses, and party affiliations.

Partly because of the platform's widespread popularity, many high-profile data exposures—like those at Accenture, WWE, and Booz Allen—have stemmed from misconfigured Amazon Web Services Simple Storage Service (S3) buckets. But Google's cloud customers have suffered leaks as well, like misconfigurations that led to leaks in Google Groups[3]. To combat those slips, Google Cloud Platform is adding visibility tools through a new feature, still in alpha testing, called Cloud Security Command Center. The idea is to take stock of all of a customer's cloud components—big organizations can have a sprawling assortment of cloud infrastructure, apps, and repositories—and offer vulnerability scanning, automated checks for potentially sensitive information, and prompts about publicly accessible assets, all in one place.
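The public-access prompts described above ultimately come down to spotting access-control entries that open a bucket to the internet. In Google Cloud Storage, that means IAM bindings whose members include `allUsers` or `allAuthenticatedUsers`. Here is a minimal sketch of such a check, assuming a policy dict in the JSON shape returned by `gcloud storage buckets get-iam-policy`; the example policy and email address are hypothetical:

```python
# Spot IAM bindings that make a Cloud Storage bucket publicly accessible.
# Operates on the JSON policy shape: {"bindings": [{"role": ..., "members": [...]}]}

PUBLIC_PRINCIPALS = {"allUsers", "allAuthenticatedUsers"}

def public_bindings(policy: dict) -> list[tuple[str, str]]:
    """Return (role, member) pairs that expose the bucket to the open internet."""
    found = []
    for binding in policy.get("bindings", []):
        for member in binding.get("members", []):
            if member in PUBLIC_PRINCIPALS:
                found.append((binding.get("role", ""), member))
    return found

# Hypothetical policy for a misconfigured bucket: object read access for everyone.
policy = {
    "bindings": [
        {"role": "roles/storage.admin", "members": ["user:owner@example.com"]},
        {"role": "roles/storage.objectViewer", "members": ["allUsers"]},
    ]
}

print(public_bindings(policy))  # → [('roles/storage.objectViewer', 'allUsers')]
```

A real audit would fetch these policies for every bucket in every project, which is exactly the kind of sprawling inventory work the new dashboard is meant to centralize.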

"Users can quickly understand the number of projects
