Privacy meets social in the new Sand app – personal analytics for individuals

Social media analytics apps like Hootsuite and Buffer have largely been the domain of marketers. The average person has no idea what time of day, or which day of the week, their posts get the most engagement. They have no idea which of their content over the last year received the most likes, comments or shares — other than the fact that friends and family with opposing political views have finally disengaged. The problem gets even harder when you try to track all of that across multiple social networks.

The new Sand app, powered by digi.me’s Private Sharing technology, provides dozens of personal analytics on social data from Facebook, Instagram, Twitter, Pinterest and Flickr. It just launched in the App Store. We’d love to know what you think.

The first video below is a short overview of the app itself, featuring my own analytics. Not surprisingly, my World Cup posts beat out my best thoughts on data and privacy in total reach and engagement, but the granular detail of my hashtags, mentions and even keywords was fascinating and enlightening.

The second video features a conversation with digi.me EVP for Technology Tarik Kurspahic. It explains what’s happening between the secure digi.me library, where my raw social data lives, and the algorithms and analytics inside the Sand app. You’ll find out how such powerful “edge processing” is done and the basics of “app to app private sharing.” I think you’ll enjoy it whether you are new to privacy and data — or if you are a founder or developer looking to build a new app based on your own ideas (digi.me has over 15,000 sources of data to choose from via a single SDK).

 

It’s business as usual for privacy at the US Chamber of Commerce and Internet Association

With the exception of a call for greater transparency around how companies collect and use data — a growing bi-partisan, public-private sector bright spot in the American debate on privacy — the US Chamber of Commerce’s ten new privacy principles and the Internet Association’s almost identical principles released today reflect long-standing industry hostility towards effective government regulation and privacy more broadly. The principles are mostly an extension of the “trust us to do the right thing” argument the industry has been making for years, an argument that has failed miserably.

The Chamber’s very first principle to prohibit state laws altogether on the subject is a not-so-subtle swipe at the popular new law on privacy in California, which industry fought tooth and nail. While imperfect, the law marked an important watershed in popular awakening to the abuses and dangers of the current “click here so we can own your data” model. The Chamber goes on to say in this first principle that “the United States already has a history of robust privacy protection,” which, in addition to being downright cynical and wrong, signals a new round of opposition to meaningful government oversight or intervention.

Their principle on harm-focused enforcement is another clearly outdated and limited approach, as is the call to prohibit individuals from being able to bring an action based on an infringement of their privacy. Together, they completely marginalize us as citizens and consumers, and ask us to trust the system to work on our behalf.

Meanwhile, the Internet Association’s principles are full of loopholes and doublespeak. Almost all references to data rights are bounded by phrases like “personal information they have provided,” which often amounts to less than 1% of the data collected or purchased by companies. The coup de grâce: “individuals should have meaningful controls over how personal information they provide to companies is collected, used, and shared, except where that information is necessary for the basic operation of the business…” When the entire business is predicated on advertising or personalized content and services, I’m not sure what is really left.

Even as a skeptic toward most prescriptive government regulation — I’d rather see innovative new tools and business models solve market and societal failures wherever possible — I have spent years watching how utterly incapable industry is of reforming itself when it comes to data and privacy. There is simply too much money and power tied to them, while all of the negative externalities fall on us as users — a textbook market failure.

That led me, in addition to my startup efforts on privacy, to work on a number of initiatives that helped create the principles and specifics for the new EU regulation known as GDPR (the General Data Protection Regulation). These laws, also imperfect, not only aim to curb current abuses; they mandate far greater transparency and provide a roadmap for a fairer, more sustainable data and privacy model built around individuals’ rights over how their data is used.

Criticized for stifling innovation, GDPR is actually doing the opposite — it is catalyzing the private sector to start building new services that empower people directly with their data, with companies competing over how much value they can create for users given access to their data and over what good stewards of that data they can be. It’s turning the “race to the bottom” we’ve seen around data and privacy into a much more enlightened and compelling “race to the top.”

Not surprisingly, the Chamber and most US companies have not been fans of GDPR. The lip service given in the principles to “privacy innovation” is a far cry from the vision and efforts underway in Europe, and nowhere do they reference our rights as citizens or consumers. In fact, as mentioned earlier, they only seek to limit those rights.

The most concerning potential development is the use of regulation shaped by these industry lobbying groups to further entrench their power and disadvantage startups and newcomers. The Electronic Frontier Foundation and others have been sounding the alarm on that possibility, and my read on the recent Congressional hearings with Facebook and Twitter is that this is their new strategy. In fact, the degree to which these privacy principles mimic the principles of GDPR while undermining them at every turn is nothing short of dastardly.

To conclude on a positive note, transparency is the single most important key to addressing the worst abuses around privacy and to unlocking a private sector competition to do right by users and their data. Despite 20 years with the curtains drawn tight around data collection and exploitation by industry, it’s simply un-American to stand against greater transparency — which is why both Republicans and Democrats are in favor of it.

Embracing the Chamber’s and the Internet Association’s call for transparency is the perfect jujitsu opportunity for those of us who want to see a more pro-user, pro-privacy model emerge. The real battle will be over just how far it goes, over how much we truly get to see and understand how our data is collected and for what purpose. Once that genie is out of the bottle, we can expect the private sector to get back to what it does best — creating even more incredible data-driven services that truly meet our needs and interests.

Trump’s right on this — it’s time to rip open the black box at Google, Facebook and Twitter

 


It’s not often that I find myself siding with President Trump and FCC Chairman Ajit Pai on technology policy. As we watch today’s congressional hearings with Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey — and the “empty seat” for Google who refused to send a senior executive — they are dead right in their call for greater transparency. The stakes are just too high to continue to allow these mammoth platforms to decide behind closed doors how to collect data about us, filter the content we see and manipulate our decision making. Regulators must act, as they have done in Europe. So too must we as citizens.

I find it unlikely that these companies purposefully bias their search results and content feeds against Trump and Republicans. In fact, most evidence so far of the weaponization of Facebook by outside actors like the Russians and Cambridge Analytica shows that they have more often exploited the platforms to support Trump and his view of the world. But their algorithms certainly contain all kinds of biases that we need to understand, and the lack of transparency raises unanswerable questions that not only make such concerns plausible but also prevent government and us as individuals from responding effectively.

And, make no mistake, these platforms were designed from the start to influence our thinking and behavior. Click by click, terabyte of data by terabyte of data, they track our every move, building sophisticated profiles of each of us to make it easier for content and advertising to reach us. In fact, the first big Facebook breach of trust was an internal Facebook project to see if they could affect a user’s emotions by elevating posts with happy or sad content. They were so proud they published their findings for other data scientists to review. Rather than see the project as a psychological study with human subjects requiring clear consent of the participants, Facebook saw it, as one executive told me at the time, “as what we do every day with A/B testing in advertising.”

It’s no accident that Mark Zuckerberg, in his op-ed in today’s Washington Post, called the challenge confronting Facebook an “arms race.” Only the largest of organizations have the resources to even participate in such a vast and expensive exercise, structurally limiting the ability of new companies and ideas to emerge. Sheryl Sandberg’s testimony is a laundry list of initiatives Facebook has undertaken recently to address these threats, most of which should have been undertaken years ago, when the company was warned about these problems but chose to ignore them because acting was bad for business. (I, like many others, met privately with Facebook in 2016 to express my concerns while also encouraging them to act publicly.)

The Electronic Frontier Foundation (EFF) and others have rightfully warned that the massive efforts by the big platforms to shape privacy and data policy are designed to ensure their long-term domination, especially around ownership and control of our data. I share this concern, and saw it first hand in Europe five years ago while leading a data initiative at the World Economic Forum. Thankfully, European regulators, backed by citizens voicing their deep concerns, managed to hash out a forward-looking set of laws that came into effect this past May (GDPR), predicated on transparency and on users getting access to their data to use however they choose — and with absolutely clear consent.

The Congress and the Administration must insist on the same here in the United States. There simply isn’t any way we can continue on the current path, no matter how much Facebook, Twitter and Google say they can save us. Because “saving us” involves saving their business model, which created the problem in the first place. It’s time for new ideas and new solutions.

Digi.me launches ‘iTunes of personal data’


I know that’s kind of a bold statement – and likely to ruffle the feathers of our blockchain-loving, decentralized-worshiping friends. But we are excited to announce the official launch of digi.me’s app store (little “a”, little “s”), which you can find at digi.me/share. (And our architecture is almost entirely decentralized and distributed…with just a few points of centralization to make sure it actually works and is secure.)

There just isn’t a better way to tell you what we are up to than that. Imagine developers building apps in a matter of days with the ability to request data from over 15,000 different sources from users – all with cutting edge privacy and security protections. And, more importantly if you’re a developer – all using one SDK! Yes, a single integration for more normalized, structured data than you can probably handle.

We will be rolling out new apps weekly, but we are announcing 9 new apps today. You can read our press release here.

This is the realization of a dream I have personally been working on for over 8 years. Probably the most infuriating response I’ve heard from Silicon Valley during that time is that people really don’t care about privacy because they keep using online services like Facebook and Google. That’s like saying people don’t care about clean air because they keep breathing.

The simple fact is that easy-to-use tools and apps designed from the ground up with privacy in mind (called “privacy by design”) just haven’t been available. That’s about to change. And we hope you’ll reach out to help us make certain it does. Whether you’re a developer, regulator, corporate CEO or concerned citizen, we’d love to hear what you think…and show you what we’re up to.

Why digi.me is launching a new API and SDK for integrated social data

This post was co-written by Shane Green (@shanegreen) and Tarik Kurspahic (@tariktech) and originally appeared on Medium.

Anyone familiar with digi.me and our mission knows we are focused on empowering people with their data. We are building a data-driven future aligned with the needs and interests of people — where individuals can securely and privately aggregate, analyze and share massive quantities of data from across their life.

This user-centric approach to data also holds promise for developers and companies who want to collaborate with their users in a win-win data partnership. We think social data is a great place to start.

We have launched a new API and SDK for accessing normalized, integrated user data from five of the top social networks: Facebook, Instagram, Twitter, Pinterest and Flickr.

The idea is simple:

— A single integration to access tons more social data from your users wherever they may be

— The ability to establish your own terms of service with your users by asking them for their data and breaking free of the terms of service and restrictions from social networks

— Wicked new opportunities to innovate

— Protection from regulators by requesting permission from your users and embracing transparency

— Democratizing data by promoting the mission of empowering people

A single integration for tons more social data

Digi.me’s consumer app allows users to import their social data from five of the leading social networks. Recent court cases in Europe have affirmed the right of users to download and sync complete copies of their data, including their own posts, photos, videos, likes and comments, as well as many of the same types of data from friends where they have been tagged.

Without ever seeing, touching or holding a user’s data, digi.me makes it easy for users to connect to their various accounts and get a complete library of their social data. Our ontology, data normalization and standardization techniques ensure the data is easily accessible and reusable via a single API and SDK.
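To make the normalization idea concrete, here is a minimal sketch of mapping network-specific posts into one shared shape. The field names and mapping below are purely illustrative assumptions, not digi.me’s actual ontology or API:

```python
# Hypothetical sketch: translating raw posts from different networks into a
# single shared "social post" schema. Field names are illustrative only.

RAW_POSTS = [
    {"network": "twitter", "text": "Hello", "favorite_count": 12, "retweet_count": 3},
    {"network": "facebook", "message": "Hi there",
     "likes": {"summary": {"total_count": 40}}},
]

def normalize(raw: dict) -> dict:
    """Map one network-specific post into the common schema."""
    if raw["network"] == "twitter":
        return {"source": "twitter", "text": raw["text"],
                "likes": raw["favorite_count"], "shares": raw["retweet_count"]}
    if raw["network"] == "facebook":
        return {"source": "facebook", "text": raw["message"],
                "likes": raw["likes"]["summary"]["total_count"], "shares": 0}
    raise ValueError(f"unsupported network: {raw['network']}")

posts = [normalize(p) for p in RAW_POSTS]
```

Once every post shares the same keys, a consuming app can compute likes, shares or reach across networks without caring where each record came from — which is the point of exposing everything through a single API and SDK.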

Your users will need the digi.me app to connect to their accounts and fetch their data. From there, your app asks the user for consent to access that data under terms you establish directly with them. Once the user approves your request, you get access to the requested data on those terms.
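The consent flow just described can be sketched roughly as follows. The class and method names here are invented for illustration and are not digi.me’s actual SDK; the point is simply that data is scoped, and nothing flows until the user explicitly approves:

```python
# Hypothetical sketch of a scoped, consent-gated data request.
# Names are illustrative only, not digi.me's real SDK.

class ConsentRequest:
    def __init__(self, app_id: str, scopes: list, terms_url: str):
        self.app_id = app_id        # the requesting app
        self.scopes = scopes        # e.g. ["social.posts"]
        self.terms_url = terms_url  # the terms the user agrees to
        self.approved = False

    def approve(self):
        # In the real flow this happens inside the user's digi.me app,
        # where the raw data lives; the requesting app never sees it first.
        self.approved = True

def fetch_data(request: ConsentRequest, library: dict) -> dict:
    """Return only the scoped data, and only after explicit approval."""
    if not request.approved:
        raise PermissionError("user has not approved this request")
    return {scope: library.get(scope, []) for scope in request.scopes}

library = {"social.posts": ["post-1", "post-2"], "finance.tx": ["tx-9"]}
req = ConsentRequest("demo-app", ["social.posts"], "https://example.com/terms")
req.approve()
data = fetch_data(req, library)  # only "social.posts" comes back
```

Note that even after approval the app receives only the scopes it asked for; the financial data in the user’s library stays untouched.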

Break free of onerous terms of service

Again, due to our unique architecture and business approach, the users themselves are not subject to the normal terms of service of social networks that apply to businesses. Once users download their own copy of all of their social data (which they hold — not digi.me), they are free to share it however they choose and without restrictions.

So you can enjoy the peace of mind knowing that you have the ability to collaborate with your users and get permission to access the data that drives your business.

More data + new rules = more opportunity to innovate

We are constantly amazed at the things people build when they have access to data and the freedom to innovate. Digi.me provides a permission-based way for you to seek access to ever-expanding datasets far beyond social, including financial, wearables, health and entertainment directly from your users.

Never before has such a combination of up-to-date datasets been available to analyze and leverage.

Speaking of innovation, we decided to put the API through its first real test by putting on a hackathon at Reykjavik University in Iceland and the results were nothing short of awesome. Check out this page to see what smart people like you are already building on digi.me.

Regulators will love you

Instead of worrying about the uncertain regulatory environment, lean in to a user-centric model, a favorite of regulators in both Europe and the United States.

Digi.me has been recognized by regulators as the ideal approach for a fair, ethical and sustainable data-driven future. Everyone is a winner — consumers, companies, developers. Plus, in Europe, digi.me is entirely compliant with the new General Data Protection Regulation (GDPR).

Your customers will love you

Your users won’t forget that you introduced them to this revolutionary new way of being in control of their digital lives. Help your users break free of data monopolies. Study after study shows people are deeply uncomfortable with the current model.

It’s not just great marketing: be among the first to do the right thing by your users.

We are already working with people to change the world and create innovative solutions, but we are just getting started. We’d love to hear what you think!

The Personal Data Economy at K(NO)W Identity Conference

I was happy to take part in the inaugural K(NO)W Identity Conference, organized by several ex-Googlers through their new organization One World Identity.

Although it turned out to be one of the more thoughtful discussions I’ve participated in on the emerging personal data ecosystem (hats off to Electronic Frontier Foundation’s Rainey Reitman for excellent moderating), it also shows the challenges of discussing such a complex subject in a room full of folks working on identity, privacy, security and data.

The biggest area of misunderstanding remains around the many win-win benefits for both individuals and companies when users are empowered with their data. Watch the video and let me know what you think @shanegreen.

https://www.youtube.com/watch?v=AUhCVYUQ0vM

Why Personal.com “graduated” to TeamData today


Ben Horowitz was right after all. He told us a few years ago that our model of user-centric data management was all wrong for consumers, but that it just might work in the enterprise. Realizing we weren’t buying, he sent a nice follow up email to encourage us to seriously consider changing our focus. We were so convinced we were right I’m not even sure if we wrote back (sorry Ben).

Today Personal.com and our Personal Data Cloud solution are becoming TeamData, a reflection of our shift toward solving critical information management and data collaboration needs of companies and their employees, as well as with consultants, vendors and customers.

Enter the enterprise. Despite game-changing transformation from team productivity and collaboration solutions in recent years, employees still have to hack their own standalone solutions to organize the information they constantly need to get stuff done — like spreadsheets, notes apps and even contact cards in their address book. Meanwhile, email, messaging, calls and in-person interruptions remain the standard for requesting and sharing data. Entire classes of jobs continue to exist solely to organize, manage and update information manually for teams and their members.

 

A MindMap showing approx. 10% of the data graph of a company

Most existing solutions for team productivity excel at unstructured data (e.g. files or notes) or messaging and project management. And the few products that understand data, like password managers and digital wallets, are limited in the types of data they manage and their security was not designed for collaboration.

The reality is we’re all still kickin’ it old school when it comes to organizing and sharing information.

Current solutions do not solve the complex challenges of structured, reusable data — which is hard to protect, growing exponentially, changing constantly and needed in super-unique combinations for different lengths of time by people inside and outside of companies.

That’s because data is a related, but altogether different game that requires deep understanding of the data itself combined with granular permissions to enable its reuse in limitless combinations while providing entirely new types of security (e.g. we follow Privacy-by-Design principles).

As we started re-architecting the Personal.com platform and data library for team collaboration six months ago, early adopters started reporting compelling evidence of the benefits. Here is one recent example from Onboardly for content marketing teams:

“All time top tools to keep your team on track…Securely stores just about all the details that your brain doesn’t ever seem to absorb.” — Onboardly

 


What’s so special about networked, structured data is that it can be reused over and over across an entire company, and everyone with permission automatically has access to the most up-to-date version when anyone makes a change (they can also have their access turned off).

There is literally only one copy of the company name, address and Federal Tax ID in a TeamData graph. One instance of the company social media account logins, demo server credentials, and visitor wi-fi. And so on, for over 1,200 different types of data covering thousands of different tasks.
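A toy sketch of that single-copy idea follows. This is not TeamData’s actual implementation, and the record names and values are made up; it just illustrates how one stored instance plus revocable grants gives everyone the current version of a fact:

```python
# Illustrative sketch (not TeamData's real code) of a data graph where each
# fact exists exactly once and is shared by reference with revocable access.

class DataGraph:
    def __init__(self):
        self.records = {}  # record_id -> value (the single copy)
        self.grants = {}   # record_id -> set of users with access

    def put(self, record_id: str, value: str):
        """Create or update the one canonical copy of a record."""
        self.records[record_id] = value
        self.grants.setdefault(record_id, set())

    def grant(self, record_id: str, user: str):
        self.grants[record_id].add(user)

    def revoke(self, record_id: str, user: str):
        self.grants[record_id].discard(user)

    def read(self, record_id: str, user: str) -> str:
        if user not in self.grants.get(record_id, set()):
            raise PermissionError(f"{user} has no access to {record_id}")
        return self.records[record_id]

graph = DataGraph()
graph.put("company.tax_id", "52-0000000")  # made-up value
graph.grant("company.tax_id", "alice")
graph.put("company.tax_id", "52-1111111")  # one update; every grantee sees it
```

Because there is only one stored instance, updating it once updates it for every person and app that holds a grant, and revoking a grant cuts off access without touching the data itself.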

Finally, networked, machine-readable data will also unlock new kinds of innovation when employees and companies grant permission to apps and analytics tools, like the new generation of AI-driven digital assistants.

We are still passionate about our vision to empower consumers with data. We already see employees starting to form teams outside of the office using their private data, and know they will discover whole new ways to use our tools.

For now, we’re excited to keep our heads down and keep solving all the challenges companies and employees face every day. Give it a try and let us know what you think — teamdata.com.

This post was originally published on Medium.

What’s the Right Model for Personal Data Stores?

This post was originally published on Ctrl-SHIFT

Last week the US-based personal data store Personal announced it was moving from a free to a paid-for service. We caught up with co-founder and CEO Shane Green to ask him about the background.

1. What does this move from Personal suggest about the evolution of the PDS market as a whole?

I think it signals the emergence of a robust, stable business model that both consumers and companies already understand – you pay for a service or product that is valuable.

We believe the heart of the current PDS or personal data vault market opportunity is around productivity and convenience, in the vein of Evernote or Dropbox. While we see incredibly interesting VRM-type opportunities on the horizon, we have found that people immediately understand the value of a PDS if their lives are made easier and more convenient by it.

For that reason, we’ve priced our core service – $29.99 per year – at a price point that individuals are already paying for similar types of services. And given the sensitive and private nature of certain data in a PDS, we have also found that people have more confidence in the privacy and security protections offered when there is a cost associated. And, of course, they don’t have the lingering suspicion about “being the product” that is being sold, as can be the case with a free service.

I would add that we have also found a meaningful appetite among companies in certain verticals, such as banking, insurance, health and education, to help pay for such a service if it brings value to their customers. The fact that they are willing to pay for the service while respecting the integrity and sanctity of an individual’s control over a PDS is a huge step forward for the industry as a whole.

For more background on our announcement, please see these two links:
http://www.bizjournals.com/washington/blog/techflash/2013/06/personalcom-is-now-charging-but-its.html?page=all
http://blog.personal.com/2013/06/leaving-beta-becoming-a-paid-service/

2. What are the particular things that you think people will be prepared to pay for? (is it safe storage of my data, easy form filling etc.)

As I mentioned, it all starts with productivity and convenience that is demonstrable in their lives. Our paid service will give the user a number of benefits: secure storage of and constant access to important data, notes and files; secure sharing of this information with other people, apps and organizations; easy reuse of data through automatic form-filling; and easy imports of information and documents from companies and organizations they do business with. In addition, we will charge for additional services and benefits around security and other functionality. Stay tuned for details.

While our initial go to market partners see this as more of a value added service to offer their customers, what will take the paid service to the next level is evidence that connecting with customers via a PDS and private network can impact both their top and bottom lines.

3. It’s often said individuals are not prepared to pay for services like these. Is this conventional wisdom wrong? If so, why?

I think conventional wisdom reflected a certain reality that is changing. Quite simply, PDS services have not delivered enough tangible value to date. They have been too limited in the benefits they delivered, and most have been quite a lot of work to make useful.

But all of the hard work by the current wave of PDS providers is paying off, both because the products are delivering more value, and the macro trends could not be coming together more powerfully, whether it is the pervasiveness of smart phones in our lives, the explosion of connected devices and the data they generate, or the privacy and security discussions now dominating headlines.

Organizations like Ctrl-Shift, the World Economic Forum and the Aspen Institute have all been documenting this shift recently, including focusing on big companies and governments (such as the US and UK) that are innovating around ways to give data back to people. It all points to the fact that the concepts of data vaults and individuals gaining greater control and value over their information is becoming mainstream. Previously, people didn’t have the technology to properly leverage that information. That has changed with the emergence of a number of start-ups, including Mydex, Paoga, Qiy, Respect Network, and Personal. But this trend is not limited to start-ups. The World Economic Forum’s recent report highlighted Fortune 1000 companies that are also providing products and services that empower people with their information.

4. What areas will Personal be looking to develop with this new investment?

Our main focus is on making our PDS available when and where it is needed in a person’s life. We are super focused on the different contexts where people need data – or are creating data – and how to make the PDS seamlessly integrated while maintaining user control with privacy and security.

Ironically, many of those contexts involve companies, organizations, schools, hospitals, government agencies, apps, sites, etc. – third parties of all kinds. There is just no reason people need to manage such data interactions separately across what we call their “personal data graph.”

Just as the Bring Your Own Device (BYOD) movement was inevitable, we believe the Bring Your Own Data movement is just as inevitable. But a Bring Your Own Data world requires new kinds of permission and trust, and as that happens, I think you are going to see a wave of BYOD apps that are an order of magnitude better than current apps that either track us or require social data sharing. We are excited to help catalyze that ecosystem.

Note

Ctrl-Shift’s Personal Data Store report analyses the growth of the PDS market, while our Breakthrough Efficiencies research looks at some of the benefits organisations can reap by linking to new personal data management services.

Data Vaults Go Mainstream at World Economic Forum

This post was originally published under the same title on the Personal blog, A Personal Stand and can also be found on the World Economic Forum Rethinking Personal Data website 

In the last six months, a fast growing and somewhat unexpected chorus has emerged around the need to give people greater control over their personal information.

Mainstream think tanks are now focused on it – see the recent Aspen Institute report, which focuses extensively on “the new economy of personal information” and the central role of individuals in it.

Governments are also catalyzing this new model. The Midata initiative in the U.K. and the Open Data initiative in the United States are giving back government-collected data to citizens in organized, reusable form.

But what’s most interesting is the growing realization among companies that their futures are tied to building new relationships with consumers who are increasingly empowered with and savvy about their digital data, and who have growing concerns about how their data is captured and used.

That’s why a new report released today by the World Economic Forum, whose membership is made up of Fortune 1000 companies, is so important. “Unlocking the Value of Personal Data: From Collection to Usage” is a product of the Forum’s multi-year Rethinking Personal Data Project, and was led by Forum official Bill Hoffman (see his blog today on the report) and a steering committee of the Boston Consulting Group, Kaiser Permanente, Visa, Microsoft, AT&T and VimpelCom. Personal also participated, and is a member of the Forum’s Global Agenda Council on Data-Driven Development.

When you consider the organizations behind the report, its major conclusions are all the more dramatic:

    • Companies and governments need to put people at the center of their data, empowering individuals to engage in how their data flows through technology. This means giving consumers greater access to and control over their information as well as the tools to benefit directly from it.
    • We need to move past old notions of privacy that revolved around simple notice and consent. Instead, companies should adopt Privacy by Design principles that address every stage of product, technology and business development. This would ensure, for example, that apps feature user-driven permissioning of data and give users greater transparency and control over how it’s used and valued.
    • The report blows a hole through the canard that e-commerce and privacy cannot peacefully coexist. It’s not a zero-sum game. Instead, it’s a win-win for businesses and consumers where even more data can flow between trusted parties.
    • Perhaps most exciting, the report detailed a number of use cases in which companies are helping consumers to leverage their personal information to improve their lives, ranging from health care (Kaiser Permanente) to financial data (Visa) to automotive price transparency (Truecar) to online reputational information (Reputation.com).
    • Personal was also profiled to demonstrate how personal data vaults can make the time-wasting tradition of form filling obsolete, saving literally billions of hours annually, and greatly improving the delivery of public and private sector services. Check out www.personal.com/fillit to see how your company or organization can participate.

We’re excited to see the model we have been building over the past three years start to catch fire, and we expect to see a lot more progress in the next six months.

Data as a Human Right

This post was originally published on the World Economic Forum Blog.

WEF-logo

Data has the power to transform our lives – collectively and individually. What is needed to unlock the profound opportunity data affords to improve the human condition – and to defend against a multitude of threats – is not a technical breakthrough but an ethical framework for its use by and beyond those who initially collect it, including providing access to individuals.

At its most fundamental level, data about individuals represents a new kind of “digital self” that cannot be easily distinguished from the physical person. Some consider it a form of property; others a form of expression or speech. Those working in the area of genomics often view personal data as the DNA sequences that make us truly unique. Whatever lens one uses, it has become increasingly clear that the consequences of how personal data is used are every bit as real for people and society as any material, physical or economic force.

Properly harnessed by ethical practitioners, the principled use of “big data” sets can improve our economies, create jobs, reduce crime, increase public health, identify corruption and waste, predict and mitigate humanitarian crises, and lessen our impact on the environment. Similarly, empowering individuals with access to reusable copies of data collected by others, also called “small data”, can help them drastically improve the quality of their lives, from making better financial, education and health decisions, to saving time and reducing friction in discovering and accessing private and public sector services. Evidence of the positive impact of leveraging data, by both institutions and individuals, abounds.

However, data, like the technology that generates it, is in and of itself neutral. It can be used for good or ill. With a proper, ethical framework, data can – and should – be leveraged for the benefit of humankind, simultaneously at the societal, organizational and individual level. Misused, its power to harm and exploit is similarly unlimited.

In fact, what raises the ethical use and respect for data potentially into the realm of a fundamental human right is its ability to describe and reveal unique human identity, attributes and behaviors – and its power to affect a person’s, and a society’s, well-being as a result. Just as in the physical world, basic rights and opportunities must be preserved.

Indeed, it is already well recognized that invasions of our digital privacy can be exploited for repression, and that technologies for sharing data can be harnessed to support freedom. More fundamentally, though, we need to extend our core rights themselves into the digital world. For example, we must adapt our notion of freedom of thought to account for the new reality that much of our thinking goes on in digital spaces – as does the management and sharing of our most private information. Preserving individual freedom will now require protecting autonomy with respect to our own data.

Clearly, cultural and regional differences regarding human rights in the analog, physical world are sure to arise in this digital, data-oriented world. We do not seek to resolve those issues, but to develop a clear framework of principles to help provide data, data access and data use the protections they deserve.