Data Privacy Day and the fight against “digital feudalism”
Data Privacy Day was celebrated this week. Led by the National Cyber Security Alliance, the day is meant to increase awareness of personal data protection and “to empower people to protect their privacy and control their digital footprint and escalate the protection of privacy and data as everyone’s priority,” according to the website.
Many companies used the day as an opportunity to issue transparency reports, re-informing users and customers about how their data is used and how it’s protected. Google added a new section to its transparency report, a Q&A on how the company handles personal user data requests from government agencies and courts.
Additionally, in a post on the Google blog, Google’s chief legal officer David Drummond outlined three initiatives the company has put forth to protect its users’ privacy and security, including its support of the U.S. Electronic Communications Privacy Act.
Twitter released its second Twitter Transparency Report (#TTR) this year, and in an effort to make the information more accessible, gave the report a new home at transparency.twitter.com. Twitter legal policy manager Jeremy Kessel noted in a post on the Twitter blog that in addition to releasing the new report, the company is “also introducing more granular details regarding information requests from the United States, expanding the scope of the removal requests and copyright notices sections, and adding Twitter site accessibility data from our partners at Herdict.”
Adam Popescu noted in a post at ReadWrite that this year, “privacy concerns seem more pointed than ever.” He warns of the dangers of becoming complacent in our awareness of our data and how it’s being used. “We’re stuck in an era that security expert Bruce Schneier describes as digital feudalism,” Popescu writes, “where people may be tethered to technology and online services that exploit them, often without their knowledge.” He also notes that even when armed with the knowledge of potentially invasive practices, “most people are too ingrained in their habits to do much about it.”
Popescu looks at what individuals can do to raise their data security IQs — “[b]e aware of what services you sign up for, and actually read the privacy policies that govern them.”
The struggling U.S. public school system needs bigger data
In addition to releasing his annual letter this week, Bill Gates sat down with a group of reporters to discuss how improving data-gathering efforts can help address the world’s social, health and economic problems. Dana Goldstein reports at The Atlantic that Gates wants to apply what he learned fighting malaria and malnutrition in Africa to improving the U.S. public school system.
According to Goldstein’s report, Gates wants to address under-achievement in our school system by changing how colleges are ranked, basing rankings “on how aggressively they recruit under-performing students, provide them with a rigorous education, and then place them in remunerative careers”; teacher colleges likewise would be ranked on how well their graduates perform in the classroom. Gates also believes teacher salaries should be based not on seniority, but on “evidence of student learning,” and that “classroom observations, while much more expensive to effectively implement, are a key component of any high-quality teacher evaluation system, and should also be supplemented by student surveys about their teachers.” You can read Goldstein’s full report at The Atlantic.
In other education news, IBM’s Watson is headed to college. The Associated Press reports that the supercomputer will be housed at the Rensselaer Polytechnic Institute (RPI) in New York for the next three years.
In an introduction to a Q&A with RPI’s Jim Hendler about Watson’s placement, Emi Kolawole reports that the key goals of sending Watson to RPI are “finding ways to parse through the large volume of unstructured data” in the world and “to train a new group [of] individuals on how to use cognitive systems.”
Big data offers a snapshot; “long data” offers historical context
Applied mathematician and network scientist Samuel Arbesman argued this week at Wired that focusing on big data and any insights we may glean from it is shortsighted: big data provides “just a snapshot: a moment in time.” Arbesman says we need to start thinking about “long data”:
“By ‘long’ data, I mean datasets that have massive historical sweep — taking you from the dawn of civilization to the present day. The kinds of datasets you see in Michael Kremer’s ‘Population growth and technological change: one million BC to 1990,’ which provides an economic model tied to the world’s population data for a million years; or in Tertius Chandler’s Four Thousand Years of Urban Growth, which contains an exhaustive dataset of city populations over millennia. These datasets can humble us and inspire wonder, but they also hold tremendous potential for learning about ourselves. Because as beautiful as a snapshot is, how much richer is a moving picture, one that allows us to see how processes and interactions unfold over time?”
Arbesman argues that big data can offer “slices of knowledge,” but that understanding the big picture requires a “longer, more historical context.” He writes: “Datasets of long timescales not only help us understand how the world is changing, but how we, as humans, are changing it — without this awareness, we fall victim to shifting baseline syndrome.” You can read Arbesman’s full piece at Wired — it’s this week’s recommended read.
Tip us off
News tips and suggestions are always welcome, so please send them along.