After a whirlwind week presenting UBDI at the Finovate conference in New York and at the MyData conference in Helsinki, I didn’t even notice the Slack message saying that the “NPR story was out.” Several hours later I sat down to listen to the NPR Marketplace Tech interview with my co-founder, Dana Budzyn. She was also in Helsinki and hadn’t even listened to it yet.
In a world of sound bites and 280-character tweets, the in-depth discussion between Dana and host Molly Wood was completely unexpected. We couldn’t have written a better lead-in:
“Universal Basic Data Income. It’s not just an idea. There’s an app for that.”
NPR Marketplace Tech host Molly Wood
I spoke today at MyData, the leading personal data and privacy conference in Europe, on owning and monetizing data. There is still so much misunderstanding and distrust around the idea. That’s not surprising given how badly people and their data have been treated in the digital world.
Hats off to NPR and Molly Wood for such a thoughtful and balanced story. We only wish our iOS app had been approved by Apple in time for people to sign up after hearing the story. We’ll post a link soon when it’s available.
UBDI announced a great group of investors today in its $825k pre-seed round, which was led by DG Lab Fund II (a JV in Tokyo and San Francisco between Digital Garage and Daiwa Securities Group) and HU Investments (of New York, London and India). PurposeBuilt Ventures of San Francisco also participated. You can read the press announcement here and their post on the round here.
As I recently wrote, the level of funding around privacy-preserving data empowerment businesses is paltry compared to the massive problems they help address – data privacy, security, trust, transparency, equitable participation in profits, etc. UBDI might be the first business I’ve seen built on a privacy by design data platform (digi.me) with a value proposition simple and valuable enough for both consumers (monetize data at more than $1k annually in a few hours’ time) and businesses (improve quantitative and qualitative research results and make regulators happy).
It still takes about 10 minutes to set up, including downloading the separate digi.me app to protect and store the raw data, and the exact price points to get people to participate in studies still need to be worked out, but UBDI has an easy-to-use app that motivated consumers will have little problem navigating. And researchers of all kinds – market research, financial research, health research, academic research – will be blown away by what they find. Stay tuned for more announcements on the app coming out of beta.
I am excited to announce that I am joining forces with Future State (futurestate.org) as a senior advisor. Future State is a relatively new organization based in Washington, DC that is focused on data empowerment, especially in emerging markets where digital policies are at key inflection points that can more rapidly support this model. It is led by Priya Jaisinghani Vora, Kay McGowan and Jonathan Dolan, who helped develop and lead key digital, data and financial inclusion efforts at USAID.
Here is how they describe their mission:
“Future State puts the rights and aspirations of people at the center of the digital revolution. Through our research, advocacy and direct efforts to spur action by policymakers, civil society and developers around the world, we advance approaches that maximize people’s participation, individual agency, choice and trust in the digital era.”
I couldn’t be more passionate about what they are doing, especially their shining a light on the need for far greater governmental, corporate and philanthropic investment in data empowerment (more on that below).
2019 marks my 10th year working on data empowerment following my departure from Nokia (after they asked me to develop a strategy for exploiting data they were surreptitiously collecting on 1.2 billion customers). What started as a movement to put data directly into the hands of people to use how they choose has turned into the frontline battleground of digital power dynamics.
Data empowerment is broadly associated with efforts around transparency, privacy, data security and equitable exchange of value, including the direct participation in the economics of data. It differs from those efforts, however, in its fundamental view that none of these can be properly addressed without the individual playing a primary role in aggregating and setting permissions to their data.
Almost all of my efforts over this decade have centered around developing the building blocks required for data empowerment, including:
building a privacy by design platform for an individual to import, secure and share data;
designing data normalization and standardization methods for organizing massively heterogeneous data;
developing private sharing protocols with apps that leverage edge processing on the device; and
creating compelling use cases for individuals, developers, companies and regulators to embrace this new model.
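To make the edge-processing building block above concrete, here is a minimal sketch of the idea: raw data stays on the device, and only a derived, aggregate insight is ever shared. All function and field names here are illustrative assumptions, not digi.me’s actual API.

```python
# Sketch of edge processing: raw data stays on-device; only a
# derived insight leaves it. Names are hypothetical, not digi.me's API.
from collections import defaultdict

def summarize_spending(transactions):
    """Aggregate raw transactions into per-category totals, locally."""
    totals = defaultdict(float)
    for tx in transactions:
        totals[tx["category"]] += tx["amount"]
    return dict(totals)

def share_insight(summary):
    """Only the aggregate summary is packaged for sharing -- never raw data."""
    # In a real app this would be an authenticated, user-consented upload.
    return {"insight": summary, "raw_data_included": False}

# Raw data, held only on the user's device
transactions = [
    {"category": "groceries", "amount": 42.50},
    {"category": "transport", "amount": 12.00},
    {"category": "groceries", "amount": 18.25},
]

payload = share_insight(summarize_spending(transactions))
```

The design point is simply that the computation runs where the data lives, so a researcher or advertiser receives an answer rather than a copy of the underlying records.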
[You can read more about those efforts – Personal, digi.me, UBDI, TFP, Fill It, TeamData, etc – and why I am so excited about the progress we have made in other posts.]
Along the way, I have had to beg (literally), borrow (literally) and steal (figuratively) to cobble together resources to build these solutions. To date, my ventures, including the combined digi.me/Personal, have raised close to $50 million – which is among the highest funded data empowerment efforts so far. But that’s an average of about $5 million annually to change the fundamental architecture and business model of the digital world. It’s a paltry sum compared to the fortune that gets invested hourly (literally) in the surveillance-based model that prevails online.
It’s time for that to change. Future State’s work will highlight success stories around the world, and provide thoughtful, practical and empirically-driven recommendations to policymakers, enlightened CEOs, developers, philanthropists and civil society leaders.
I think what happens without data empowerment is getting clearer by the day. I’m looking forward to working with these visionaries to show what the future state can look like when we all have primary agency over our data.
I was invited by the University of Michigan’s School of Engineering and Center for Entrepreneurship to give a “Ted Talk” on my entrepreneurial journey and how I came to be so passionate about empowering people with their data, privacy and identity.
I wrote one of my first blog posts in 2010 about the origins of my thinking when I was a student around concerns of “being defined by others,” so I really enjoyed this special chance to share my story. I’ve never been more convinced that our data-driven future depends on each of us having agency over our data and identity.
Congress confronted privacy and personal data rights in a pair of hearings last week. This week, Mark Zuckerberg announced a “privacy pivot” at Facebook, yet failed to propose a single tangible reform of its core business.
Zuckerberg’s privacy manifesto illustrates the core problem. Big tech companies such as Facebook have no real interest in changing their practices. Their entire business model is based on owning and exploiting personal data to manipulate people and sell advertising. They’ll defend that at all costs.
The ongoing series of privacy scandals and last year’s high profile hearings led the Silicon Valley giants, including Facebook, to hire a small army of lobbyists. Although there has been no new legislation yet, data privacy reform is in the air, and the jockeying behind the scenes is telling.
Facebook’s position and the overall surveillance-based business model has become impossible to defend. With few straightforward options, the lobbyists for the social network and other tech behemoths are now trying to manufacture a partisan crack in what is clearly a bipartisan issue with the hope that they can co-opt the regulatory process as a result.
Even more troubling than watered-down privacy legislation that creates an appearance of accountability and enforcement is the possibility that regulations will create significant new compliance costs. This would allow these data oligopolies to further entrench their dominance against startups and new entrants to the market. Facebook can simply absorb the new costs, while smaller companies and startups, many of whom are starting to emerge with innovative tools to empower people with their data, would be boxed out.
The other focal point of their strategy is to preempt state-level activism and innovation. This strategy started last September, when the U.S. Chamber of Commerce and the Internet Association released ten new “privacy principles.” The very first of these principles, which has been echoed nonstop since, was to call for “a single federal privacy law.” While it’s reasonable to demand a unified national approach to privacy, it is far too early to defang state-based privacy laws such as the one in California, which the industry fought before losing handily in a public referendum.
California has, in fact, been a national leader on privacy issues, and their popular law goes a long way toward returning control to consumers. It takes the practices of big companies out of the shadows, which is the first step toward empowerment. It isn’t perfect, but it marked an important step and awakened many to the abuses and dangers of the current “click here so we can own your data” model.
Under the newly elected Democratic Gov. Gavin Newsom, California appears poised to continue to lead on this issue. I don’t think anyone has fully processed the significance of Newsom’s calls for a “digital dividend” in his State of the State address earlier this month. This was one of the first real acknowledgments from a major public figure that trillions of dollars in wealth is being created by companies from users’ personal data, and that those same users have a right to a piece of that pie and to ensure that their data is being used for and not against them.
Of course companies that are making trillions of dollars off of private data are going to resist efforts to let others wrangle their cash cows. But the next time Zuckerberg or another Silicon Valley executive is hauled before Congress to defend data practices, which will likely happen in the coming months, we need to make sure we don’t fall for their shell game.
It’s long past time for power over data to be put back in the hands of the users to whom it belongs. I hope that Congress uses that principle as their starting point and their end goal. We need real action, not empty Facebook posts.
Shane Green is CEO of the private data sharing company digi.me and co-founder of UBDI, a consumer-controlled market research and data monetization community. He blogs at shanegreen.org and you can follow him on Twitter @shanegreen.
Every week brings another sign that consumers are coming around to the idea of making money off of their own data. USA Today’s Marco della Cava went in depth on the issue in a feature story today on California Governor Gavin Newsom’s call for a Digital Dividend, which you can read here.
The article commissioned original research showing that 45% of Californians already support getting “a share of profits from company use of user data.” Fully 26% were undecided and probably needed more specifics before deciding. And it’s fair to assume the 28% against the idea assumed the worst given current industry practices – that their private data would be sold to the highest bidder (which is not the case).
That’s an overwhelmingly positive response out of the gate for the California governor. The article also covers both UBDI and digi.me and the work we are doing to help make this possible. It’s an exciting time to be working on such a game changing problem.
California Governor Gavin Newsom called for a “Digital Dividend” in his State of the State address this past week. He didn’t offer specifics, saying only that he is instructing his staff to study the idea. But the point was as clear as the Silicon Valley sky – he wants California citizens to participate in the trillions of dollars of wealth being created by companies from their personal data.
I was caught by surprise by the proposal and Newsom’s use of the term Digital Dividend, but not by the idea itself. I’ve been working on this concept for a decade, and have been recently calling it Universal Basic Data Income – the part of UBI derived from one’s own data. I’m currently working on a startup called UBDI that is trying to prove that people can ethically and sustainably earn hundreds and then thousands of dollars a year from their data (built on the digi.me Private Sharing platform).
I love the concept of a Digital Dividend, and the precedents it evokes such as the oil dividend in Alaska. Gov. Newsom’s proposal is a critical development in this movement. No US political leader of his magnitude, much less the leader of the state with the most wealth creation from personal data, has made such a bold declaration.
I spoke to AdWeek about his proposal this week and why this is different from all the other calls for better privacy laws. The idea was so unexpected that the usual industry advocates like the US Chamber of Commerce and the Internet Association haven’t even responded. I’m not sure they know what to make of the idea. Maybe that’s a good thing. I’d hate to have to explain why people should keep being cut out of their fair share of the mega profits derived from the data they produce.
This week’s Time magazine cover feature on privacy, data and Facebook marks another milestone on the path to a new, fairer, more transparent model. Marc Benioff, founder of Salesforce and new owner of Time, wasted no time in shining a light on this critical subject.
The column by Tim Cook is the biggest line drawn in the sand yet by Cook and Apple, who are declaring war on the surveillance economy that online advertising requires. It also strikes at the heart of two of their biggest competitors – Facebook and Google.
In addition to supporting a call for new privacy laws, Cook writes:
“But laws alone aren’t enough to ensure that individuals can make use of their privacy rights. We also need to give people tools that they can use to take action.”
Roger McNamee, an early investor in Facebook and mentor to Mark Zuckerberg, writes an even more damning piece about his difficult decision to call out Facebook executives and ask for them to be held accountable. The article (and his book Zucked) reads like a Silicon Valley version of Frankenstein.
“When I sent that email to Zuck and Sheryl, I assumed that Facebook was a victim. What I learned in the months that followed–about the 2016 election, about the spread of Brexit lies, about data on users being sold to other groups–shocked and disappointed me. It took me a very long time to accept that success had blinded Zuck and Sheryl to the consequences of their actions.”
If Acxiom getting religion on privacy sounds unlikely to you, you aren’t alone. In fact, I’m deeply concerned about companies like them trying to co-opt potential privacy legislation in the United States to both protect themselves and to block innovative privacy models like ours at digi.me, as I discussed with AdWeek just yesterday.
I have personally asked Acxiom many times, including directly to their board of directors, to make a downloadable copy of their digital profile data available to consumers. GDPR in Europe now requires it, and it’s called data portability. The answer has always been no.
If Acxiom wants to prove they are on the digital road to Damascus, they should make their data available to consumers. Every consumer could download a complete, reusable copy of the data Acxiom has about them – thousands of detailed data points.
At digi.me, we have the proven tools to let consumers download exactly this kind of data securely and privately – and to use however they choose (we don’t touch, hold or see data). We’ll do all the work, and won’t even charge for it.
Acxiom, it’s never been easier to prove that you’ve changed.
The following Digital Bill of Rights was crowdsourced at SXSW in Austin, TX on March 11, 2012 at a session I led with Anne Bezancon (then CEO of Placecast, now part of Ericsson) called “We the People: Creating a Consumer’s Bill of Rights.” It seems like a timely reminder that many of the current issues we are struggling with in terms of privacy, transparency and control of data are far from new, and that the issues they touch in our lives are as fundamental and transcendent as those covered in the original Bill of Rights.
The packed session at SXSW included participants ranging from privacy experts to advertising and internet executives. Despite the different viewpoints, we concluded that we could not rely on companies or governments to determine these rights for us any more than the Founding Fathers relied on King George or the British East India Company to do so on their behalf. The attempt to make them go viral online fell short…at least to date.
The group also believed the rights to be so interconnected that they needed to be considered together – each reinforcing and providing context for the other. The rights do not cover each and every right or code of conduct that we believed should exist, but were designed to be a minimum set of rights that would create the basis for a safer, fairer and more innovative digital world.
Finally, like all rights, we anticipated that there would be occasions and contexts where such rights might be limited or waived. But we asked ourselves in selecting each of them if we wanted a world where such rights did not exist and were not the default: Where there was no right to transparency, no right to privacy, no right to choice and control, etc.? Our answer was unequivocally no.
Digital Bill of Rights
March 11, 2012 – Austin, TX
This Digital Bill of Rights applies to the sanctity of the digital self
The digital self should be afforded equal standing as the physical self before the law and society
1. Right to transparency
I have the right to know who collects, uses, shares, or monetizes my data and how they do so
I have the right to know how my data is protected and secured
I have the right to know the value of my data
2. Right to privacy
I have the right to privacy by default
3. Right to choice and control
I have the right to give and withdraw permission to collect, use, share or monetize my data
I have the right to view, access, correct, edit, verify, export and delete my data
I have the right to own and/or use freely the “golden copy” of my data
I have the right to buy the product or app and not “be the product”
4. Right to safety
I have the right to expect my data to be stored and transported securely
5. Right to identity
I have the right to have different personas in context
I have the right to anonymity
6. Right to minimal use
I have the right to have my data collected, used, shared or monetized only for the specified purpose and context
I have the right to be forgotten after my data has served its purpose
In our journey at digi.me to create compelling reasons and tools for consumers to take control of their data, the new TFP app stands apart. The app, now available for iOS in the App Store and Android in Google Play, allows you to privately scan a lifetime of social posts to find potentially vulgar or objectionable content.
I’m not sure what I can add to this great write up by the Daily Mirror’s Ian Morris, particularly if you are looking for a job – or trying to keep one you already have (including, say, hosting the Oscars):
“Christmas party season is stalking you like a lion pursues an antelope, waiting for you to have one too many glasses of vino and vomit up the veritas all over social media.
But a new app promises to wipe up your social media mess, and might help you stay gainfully employed into 2019. Called “That F***ing Post” it hunts through your accounts looking for things you shouldn’t have said.
“The app says it can go back to the start of many social media accounts, perhaps tracking down faux pas from years ago. Handy if you wrote things during the throes of youthful indiscretion but now want a paying job.”
The app is a must for just about anyone who’s spent more than 10 minutes posting on social media, but especially for younger people who grew up posting their every thought (or bad idea).
TFP, which stands for That F’ing Post, is built with digi.me’s private sharing technology, and scans posts and comments from Facebook, Instagram, Twitter, and Pinterest. A simple workflow lets you swipe left to ignore a post or swipe right if you’d like to go back and edit or delete.
The app combines 8 libraries of bad terms and phrases to enable its machine learning, which happens inside the app without ever going to external servers (true edge processing).
That said, it has a lot to learn. Lots of words like “sex” or “shoot” have plenty of fine uses, while other words and phrases escape its digital net. We are encouraging users to send ideas for new words and phrases to add to the library by using #TFP. Check it out and let me know what you think!
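A toy sketch of the on-device scanning approach described above might look like the following. The term lists and matching logic are purely illustrative assumptions, not TFP’s actual code or libraries; the point is that merged term lists can be matched against posts entirely locally, with nothing sent to a server.

```python
# Illustrative sketch of TFP-style on-device scanning: merge several
# term libraries, match posts locally, and flag candidates for review.
# Term lists here are placeholders, not TFP's real libraries.
import re

TERM_LIBRARIES = [
    {"idiot", "hate"},      # e.g. an insults library
    {"drunk", "wasted"},    # e.g. a party-references library
]

def build_pattern(libraries):
    """Merge all term libraries into one case-insensitive regex."""
    terms = sorted(set().union(*libraries))
    # Whole-word matching, so "shooter" wouldn't match a bare "shoot" entry
    return re.compile(r"\b(" + "|".join(map(re.escape, terms)) + r")\b",
                      re.IGNORECASE)

def flag_posts(posts, pattern):
    """Return (post, matched_terms) pairs -- all processing stays local."""
    flagged = []
    for post in posts:
        hits = sorted({m.lower() for m in pattern.findall(post)})
        if hits:
            flagged.append((post, hits))
    return flagged

pattern = build_pattern(TERM_LIBRARIES)
posts = [
    "Great day at the beach!",
    "I was so wasted last night, what an idiot I am",
]
flagged = flag_posts(posts, pattern)
```

As the post notes, plain term matching produces false positives on words with innocent uses, which is exactly why a swipe-to-review step sits between flagging and any edit or delete.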