I will keep this short, in order not to rant too much. The sole purpose of this post is to express my disappointment in things Jamaican. Politicians who do what they decide they want to do - irrespective of the will of the people, irrespective of the damage and chaos that their actions will cause, irrespective of the generational hurdles that they are putting up for the nation's children (their kids are insulated) - are not what we are sold as people who elect "representatives" to advocate for our collective best interest. There is no transparency in this Constitutional Monarchy that Jamaica resides in. Are Jamaicans fully aware of the holdings, affiliations and (business) interests of their elected politicians? Are Jamaicans fully confident that their leaders are doing what is right for them and not what is right for their own concerns? Where is the accountability necessary to ensure that "elected service" is exactly that - service to the people? Has it all been tainted?

The Root Of It All - Corruption

I know you have heard this sankey before - over and over again - beaten to death - to the point where it does not faze you, it does not register, and everyone assumes that it is a part of life. It should never be a "natural" part of anything. It is a bastardization of the system that the Jamaican public is sold as democratic. Corruption is formally defined as "the use of public office for private gain". I am not just talking about your grand-daddy's corruption here. A bribe here or there to clear a barrel from the wharf. A likkle pocket money to make some government process go faster or smoother. All of these are well understood and rightly identified as corrupt activities. I am talking about more damaging and long-lasting acts. Things like creating financial avenues that "seemingly" address public problems, while profiting from them on the back-end.
Things like manipulating financial markets, misinforming the Jamaican people and ensuring that your actions lead to profits for you and yours. I am talking about systemic games that hurt millions of people for very long periods of time, but that are not illuminated and visible to the public (until the worst happens). Can anyone in public office in Jamaica honestly and publicly say that they are not corrupt (by the generally accepted definition above)? Everyone is at fault.

The Jamaican Media

The media is supposed to be one of the first countermeasures in our system - holding public officials, who have been entrusted with the keys to the country, accountable. Where is the media on matters that are important to the present and future? Where is the Jamaican media when it comes to investigative journalism? Where is the media when it comes to being the objective arbiter of truth and of what is good and right for Jamaicans? Where is the media when it comes to representing the common youth and ensuring that their future will be better? Crickets everywhere. Unless you like sensational, surface-level discussions about the inconsequential. Where is the media when it comes to shining a light on corruption and ensuring that Jamaica lives up to its promise? When did journalism die in Jamaica? When did the media give over its power to the elites and become their lapdog? When did this critical check-and-balance in the Jamaican governance process become ineffective? Regardless of the answers, it is time for a resurgence. Time for the media to start fulfilling its extremely important mission. However, there are a lot of parties involved, not just the media.

The Jamaican People

Light bills are now taxed. The Road Traffic Act is in full effect. National Housing Trust!!!!!! The Cybercrime Act of 2015 seeks to make all Jamaicans potential criminals for just interacting and expressing themselves online. The wholesale privatization of public Jamaican assets continues full steam ahead.
The Jamaican government still prioritizes servicing its international loan debt over the needs and growth of its people and the local economy. And what has been the reaction of the public? A few rumblings here today. A few rumblings there tomorrow. Nothing after a week. Back to normal, and their voice counts for nothing. A horrible example of representational politics if I ever saw one. The "Noise. Noise. Block Road" strategy has been "best in breed" practice in Jamaica since time immemorial. It has also been wholly useless and ineffective for just as long. How about the people trying different techniques to hold government and media accountable? What about hitting them where it hurts? What about investigating the behind-the-scenes deals that are really driving these initiatives? What about naming and shaming the people involved? What about finding the by-laws and statutes that can empower you to call people (representatives or otherwise) on their bullshit? What about real "grassroots organization" to educate, uplift and empower voters to be accountability barometers for the people they elect? What about trying something different, something new, something unexpected, that the people who have the public trust would never expect? Honestly, I don't know what will work, but I know a few million of us can come together and try a whole heap a ting.

The Jamaican Intelligentsia

At this point, I know I have lost a lot of you, and that is fine. I know I promised to be short, and I think I am failing miserably at that too. Okay. However, two more things before I sign off. The most egregious of all the phenomena in the Jamaican ecosystem, to me, is the people who know better but are just complacent and resigned to the status quo. To whom much is given, much is expected. It is through the shared sacrifice and labor of every single Jamaican that the Intelligentsia was and is able to get to where they are, whether they are still in Jamaica or in the diaspora.
Sure, you worked hard individually. Sure, you had the love and support of your family. However, if you did not have the infrastructure, name recognition and prestige that come from all of us - even the ones you won't publicly embrace and/or acknowledge - working together and making Jamaica the international powerhouse that it is, then you would not be where you are today. Let that sink in. Now, think about how your complacency is helping make this ship sink faster. Then think about how you can help fix the problem and do something.

The Effects of Unaccountability: Apathy and Powerlessness

I have the luxury of being an observer and an insider at the same time. I am a Jamaican who, like many others, capitalized on education to increase the number of opportunities available. I am forever grateful for my heritage and know that it is my responsibility to give back in whatever way I can (whenever I can). This is why, when it comes to matters related to Information Technology and Jamaica, I provide help (whether asked for or not). A few days ago, I provided my thoughts on the disastrous impact of the Jamaican Cybercrime Act of 2015 (click here for that post). This advice was unsolicited. I am anti-politics and pro-people. The reaction that I received from the blog post indicated that there is apathy and powerlessness in all segments of Jamaica when it comes to public policy. "This does not apply to me." "There is nothing I can do about it." "The politicians will do whatever they want, no matter what I say." These were sample responses. It broke my heart. If you cannot affect your future, then who can? "If it's to be, it's up to me" - Source Unknown
Sweet Baby Jesus!!!!! I spent the last few hours reading the 2015 Jamaican Cybercrime Act. Though it is a relatively easy (36-page) read, let me spare you the trouble of wading through the legalese and misspellings. The Cybercrime Act of 2015 seeks to address:
Additionally, the Act specifies legislation related to protected computers (Section 11), rules on inciting cybercrime (Section 12), and guidance on hindering or prejudicing cybercrime investigations (Section 13). In an effort to include everyone in the fun, Section 14 addresses offences by corporate bodies. Further, the Act outlines actions that someone who is harmed by cybercrime (a corporate body or an individual) can take to get compensation from their "victimizer" or "offender" (Section 15). At this point, you are saying to yourself, "Sounds good to me. What is your problem, Ty?" As usual, the devil is in the details. I won't spend this post providing a sentence-by-sentence review of the Act (like I did two years ago when the Cybercrime Act of 2010 was under review; those details are here). For that detailed review, I am available for consulting via my security firm. In this blog, I will only highlight the most glaring and mind-boggling concerns.

Lack of Awareness of the IT Security Profession and Education

Let me start with the basics. Sections 5 and 6 demonstrate a marked lack of understanding of the field of computer security and the fundamentals of training computer security professionals. System administrators who install patches for zero-day exploits are normally warned that the patches may have unforeseen and untested impacts on the rest of their ecosystem; this is typical of the field. Under these sections of the Jamaican Cybercrime Act of 2015, any system administrator who performs a security update is potentially in breach of the Act. Another example is that of a system administrator, security professional or academic who needs to listen to and gather network traffic in order to spot and respond to security attacks and secure their systems. Under the current legislation, they could face prosecution.
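To make concrete the kind of defensive work at stake: a security professional routinely inspects traffic to spot attacks such as port scans. The sketch below is a deliberately simplified, hypothetical illustration - the event format, names, and threshold are my own inventions, not anything drawn from the Act or from a real monitoring tool - of the sort of analysis that Sections 5 and 6 could arguably criminalize.

```python
# Hypothetical sketch: flag a source address as a likely port scanner if it
# probes many distinct destination ports. Real monitoring would capture live
# packets; here we assume a pre-collected list of (src_ip, dst_port) events.
from collections import defaultdict

def find_port_scanners(events, threshold=10):
    """Return the set of source IPs that touched more than
    `threshold` distinct destination ports."""
    ports_by_src = defaultdict(set)
    for src_ip, dst_port in events:
        ports_by_src[src_ip].add(dst_port)
    return {src for src, ports in ports_by_src.items() if len(ports) > threshold}

# One host probing ports 1-50 stands out against ordinary web traffic.
events = [("10.0.0.5", p) for p in range(1, 51)] + [("10.0.0.9", 443), ("10.0.0.9", 80)]
print(find_port_scanners(events))  # {'10.0.0.5'}
```

The point is that this is bread-and-butter defensive analysis, yet it requires gathering other people's traffic, which is precisely the activity the Act fails to carve out.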
Not to mention the fact that teaching the next generation of security experts becomes untenable in Jamaica under this Act, for fear of prosecution. All in all, a bone-headed move if one wants to foster secure and private systems in Jamaica. Or maybe I got this all wrong and these exceptions will be covered under an amendment of the Interception of Communications Act?

Nuh Run Nuh More Joke Roun Ya

The next point is so frustrating that I have to quote directly from the Act: A person commits an offence if that person - (a) uses a computer to send to another person any data (whether in the form of a message or otherwise) that is obscene, constitutes a threat, or is menacing in nature; and (b) intends to cause, or is reckless as to whether the sending of the data causes, annoyance, inconvenience, distress, or anxiety, to that person or any other person. An offence is committed under subsection (1) regardless of whether the actual recipient of the data is or is not the person to whom the offender intended the data to be sent. So, you are telling me that any politician or (rich) Jamaican who receives a text, email or other communiqué that they can interpret as threatening, obscene or menacing may sue under this new Act (whether the message was intended for them or not)? Goodbye, freedom of expression. Goodbye, joking around (or ramping) with a friend in what may be subjectively interpreted as negative. Wow!!!!!!!! I am hoping that the intent of the Law - possibly cyberbullying, spamming of online porn, etc. - is different from the letter of the Law. Right now, a lot of people are going to be in trouble. This could also be a very effective way of shutting down a rival, whether political, business-related or other.

Everyone Knows What a Protected Computer Is

Section 11 mentions a "protected computer" and assumes that a reasonable person should know what a protected computer is.
Unfortunately, this is a highly subjective call that requires a judge to know the thoughts and mindset of an alleged offender. Without computers being clearly defined and labelled as protected computers, this section is open to manipulation by the owners of computer systems, who may argue (and defend) the "protected computer" status of their systems. Overall, a horrible way to craft Law. Where are the "agreed upon" standards? What is universally understood? Is there a definition of "Protected" that is clear to everyone? Is there a "Data Protection Act"? Hmmmmm....

Plain Stupidity

From Section 10 onwards, it gets progressively worse, because the rules build upon the previous sections, which we have already gone through and declared bone-headed. Section 12 states that if you and a friend are running a joke on another friend and it mistakenly gets to the wrong person, then that person can charge both of you under this Act. We all know what happens when you build a house on sand. *Shaking my head*

Protect The Lawyers

Section 13 is the only section where there is an explicit call-out for what it means to "not commit an offence". Of course, it stipulates the cases where lawyers are not liable or covered under this Act. Interesting!!!!!! Why wasn't there a call-out for IT security professionals and academics in the previous sections?

All a Unnu is Fi Wi

This final point is what infuriates me most. From the Act: 22.-(1) This Act applies in respect of conduct occurring (a) wholly or partly in Jamaica; (b) wholly or partly on board a Jamaican ship or Jamaican aircraft; (c) wholly outside of Jamaica and attributable to a Jamaican national; or (d) wholly outside of Jamaica, if the conduct affects a computer or data - (i) wholly or partly in Jamaica; or (ii) wholly or partly on board a Jamaican ship or Jamaican aircraft.
Translation: if you are Jamaican, or if you are accessing "stuff" in Jamaica, it does not matter where in the world you are - you are governed by this Cybercrime Act. I leave you to think through the impact of this. Spoiler alert: all Jamaicans, wherever you are, you are screwed.

Conclusion

I am extremely disappointed in Minister Paulwell and his team.
You can do better. The Jamaican people deserve better. All you have to do is include a Computer Science professional in the drafting of Acts like these to advise you on the feasibility of these rules. Or maybe you want this Act exactly as it is. Readers, what are your thoughts?

The National Day of Civic Hacking (NDCH) started in 2013. Its mission is to engage the community in solving civic issues. The event brings together urbanists, civic hackers, government staff, developers, designers, community organizers and anyone with the passion to make their city better. They will collaboratively build new solutions using publicly released data, technology, and design processes to improve our communities and the governments that serve them. Anyone can participate; you don't have to be an expert in technology, you just have to care about your neighborhood and community. On June 6, 2015, thousands of people from across the United States will come together for the National Day of Civic Hacking. The U.S. Census Bureau, U.S. Small Business Administration, BusinessUSA and the U.S. Department of Labor have joined forces to host an event in support of the National Day of Civic Hacking in Washington, DC.
The goal is to make open data more useful to small business leaders in your local community. This Saturday, stop by and help support the backbone of the American economy. More details here. Source: O*NET Team. May 4th, 2015. O*NET (the Occupational Information Network) is the U.S. Department of Labor's definitive database of occupations.
It is common folklore that "on average, O*NET updates only around 100 occupation codes annually". The innate researcher in me was always curious about this. Luckily, this was clarified for me early last week in a meeting with the O*NET team. It turns out that this misconception is a product of horribly handled communication and the need to simplify what is a complex issue. The team produced the chart above, which documents the updates to O*NET occupations since 2003. Light blue represents the number of occupations "comprehensively" updated by the O*NET survey; "comprehensive" means that all the main components of the occupational profile were updated. Dark blue represents the number of occupations that had some element of their O*NET information updated. These changes come from sources other than the survey, e.g. analyst ratings, customer and professional association input, government programs, transactional data, and Web research. In 2010, for example, 123 occupations were updated via surveys, while the remaining 874 occupational profiles of that year's total (997) had profile elements updated via non-survey methods. So, all the occupation descriptions are updated every year. Interesting. What are your thoughts?

Many, many years ago, I read The Tools - by Phil Stutz and Barry Michels - and every so often I reflect on the concepts they presented. One of the principles that stuck with me is that of Jeopardy. Not the well-known American game show, but the word's traditional interpretation.

The Meaning

If you look in any dictionary, jeopardy is defined in one form or another as "the danger of loss, harm, or failure". Your job is in jeopardy. Your life is in jeopardy. Your freedom is in jeopardy. We should all be familiar with this.

The Twist

Phil and Barry framed the idea of jeopardy in an interesting context. Imagine that you are looking at yourself as you lie peacefully in a bed, taking your last remaining breaths on Earth.
Are there things that you wish you had done? Skills that you have not used? Passions you have not let surface? If so, then use this as motivation - to ensure that you maximize every single second - to be the most authentic version of YOU possible. The idea is that when you take your last breath, you want to have no regrets. You want to feel complete. The reality is that we all have a limited amount of time in this form, and a lot of gifts and talents. Jeopardy forces us to look carefully at ourselves and ask, "Am I using my gifts and talents effectively?"

Act

Use this feeling of jeopardy to appreciate the urgency of life - the urgency of your decisions, the urgency of what you are doing now, the urgency of what you could be doing.
Use this feeling of jeopardy to ensure that you are doing the best thing possible - right now - that maximizes you and that makes you happy. The next second could be your last. What will you tell the YOU standing over yourself as you go? How many regrets will you have? On Jeopardy day......

Jeffrey Chen, a fellow Presidential Innovation Fellow, R expert, and all-round amazing human being, took a few hours out of his spare time and created a cool visualization of the Census's diversity data.
Is anyone surprised by that?
Yes, I am aware that the Census has an interactive map.
However, this very focused visualization tells a specific story and highlights what most are afraid to confront. What is your take on the data?

I am going to make this short, because Robert Reich has already said what needs to be said in this space (here). My role here is to punctuate the obvious. Let's start with what people believe.

The "Free Market"

The formal definition of a free market goes something like this: a market system in which the prices for goods and services are set freely by consent between sellers and consumers, in which the laws and forces of supply and demand are free from any intervention by a government, price-setting monopoly, or other authority. A free market economy is a market-based economy in which prices (for goods and services) are set freely by the forces of supply and demand and gravitate to their point of equilibrium without intervention by government policy. It typically entails support for highly competitive markets and private ownership of productive enterprises. Free markets are also often associated with capitalism, though the concept has been championed by anarchists, advocates of cooperatives, and socialists.

Sounds Good To Me. (What is Your Problem, Buddy?)

I agree. It sounds wonderful and fanciful. And it is exactly that. Apart from the assumptions made in the definition of a free market, which do not hold in any country on Earth, people in societies that embrace free market capitalism also assume that the "free market" is natural, not man-made; inevitable and not influenced by the government. This would mean that the outcomes of the "free market" - i.e. inequality, poverty, insecurity, famine, war, etc. - are beyond our control. This is all fiction.

The Reality

At the core, the "free market" is a set of rules that specify:
The rules defined above are a mere subset of the complete set that goes into creating a "free market". These rules do not exist in nature. They are creations of the minds of men (and women). Markets are not "free" of rules; the rules define them. As citizens of a democratic society, we have a say in the "free market". We help define the rules in a number of ways, including voting governments into power. Governments help to organize and maintain the rules that create the "free market". Let me repeat this for emphasis: governments don't "intrude" on free markets. They facilitate free markets. But? But? But? But NOTHING.
Ask yourself why there is no longer a (public) market for African slaves, why young children are not delicacies at your favorite restaurant, why urine-inspired beverages are not all the rage at your local watering spot, or why your water supply is not filled with faeces and bacteria. Now, own your part in creating and evolving our "free market". Now, realize that you are responsible for the consequences of the "free market" (as it exists today). Yes, you share responsibility for inequality, poverty, crime, homelessness, etc. Let that sink in. Now, do something positive about it.

Over the last few weeks, I had the pleasure of chatting with Aneesh Chopra - the first Chief Technology Officer of the United States of America - on a number of topics related to skills, workforce and innovation. One of our earlier conversations touched on the state of job postings online. Aneesh energetically (and nonchalantly) observed that it was all a house of lies - where job-matching and job-scraping company after company would forage for as many job postings as possible, ingest them into their own (proprietary) systems, and create business value from them. This got me thinking. Are companies who are advertising jobs in their firms aware that:
In 2011, Citi estimated the online recruitment market size to be $3 billion globally. In November of 2013, Staffing Industry predicted the global staffing market to exceed $422 billion by the end of 2013, with 30 percent ($126.6 billion) coming from the U.S. So, I am guessing that a lot of companies are either unaware of (or unable to do anything about) the widespread theft and misappropriation of job listing data. Why? Because in a market of that size, firms that were aware of this would constantly be in litigation to maintain their data ownership rights and keep their IP. What I find even more fascinating is the slippery slope (of admitting that data ownership matters).
Once firms start taking this seriously, will they then have to admit that they are stewards of the data they receive from their clients/users (and not data owners in that context)? What do you think?

I had the pleasure of meeting FCC Commissioner Ajit Pai at a dinner reception a few weeks ago at the residence of the Swedish Ambassador. He struck me as a personable, funny, intelligent, and kind (family) man, who recognized the need for innovation and was not afraid to leverage other people with expertise in areas that complement his own. He mentioned the impending vote on Net Neutrality and the raised levels of interest from everyone about the topic. He seemed excited, eager and earnest about voting on the issue and doing the right thing.

When You Assume

Now imagine my shock when the decision came down and I learned that he voted against the measure. (You can read his full dissent here and a summary of it here.) I thought to myself: "What could have led him to this decision?" I read his full dissent statement (and the summary) multiple times to try to figure out how someone so smart could come to a "No" on this initiative. Were there details, i.e. fine print, in the proposal that I (and the public) were not being told about? I struggled with this for a long while and debated whether I should say anything at all. Unfortunately, my curiosity could not let me rest. Just to ensure that everyone is on the same page and that we are all talking the same language, let me start with the basics.

Back to Basics

What is net neutrality? It is the principle that Internet service providers, specifically cable companies, should enable access to all content and applications regardless of the source, and without favoring or blocking particular products or websites. But? But? But? Isn't this how the Internet has always been? Yes, you are correct. We always had net neutrality.......
until some companies decided that it would be more profitable to have different levels of service that they could charge for, where some traffic would be fast and other traffic not-so-fast. I won't re-hash the entire story in this blog; the interested reader can get more here. But? How could companies do this? Legally, there was no one to stop them. Broadband was not regulated under the same rules as the telephone lines (which the Federal Communications Commission (FCC) deemed had to remain open).

What Does the FCC Decision Mean?

The gist of what the FCC did in the last week of February 2015 was to finally side with the American people. (The story is more complicated - as all such relationships are - with the FCC being against Net Neutrality before it was for it.) The FCC said, "America, you can continue to have net neutrality." However, they said it in a way that killed all the legal ambiguity surrounding the issue. Now, companies know that if they try to play favorites or to monetize the Internet wires, there will be penalties to pay. So now you are all caught up. How could anyone disagree with this?

The Dissent

Ajit's dissent starts off with a single true sentence and then immediately pivots: For twenty years, there's been a bipartisan consensus in favor of a free and open Internet - one unfettered by government regulation. So why is the FCC turning its back on Internet freedom? It is flip-flopping for one reason and one reason alone. President Obama told it to do so. It is true that a free and open Internet has been a bipartisan issue (or rather, a non-issue). It is true that it has been unfettered by regulation. It is also true that, until recently, the broadband providers did not realize the transformative nature of the Internet and were not innovative enough to capitalize on the ecosystem that grew on top of their infrastructure.
After the success of a few Internet companies, like Google and Facebook, broadband company executives sought a way to get a piece of their revenues. After all, these companies were using their wires for very little money (relatively) and making huge profits. Where was their piece? This led to "The Great Flexing" (trademark pending) - the golden age where broadband companies throttled service and started shaking down Internet companies. The lack of regulation on broadband played in their favor. So, the FCC decision is not turning its back on Internet freedom, but turning its back on greedy cable executives. Ajit's dissent goes on to say: The Commission's decision to adopt President Obama's plan marks a monumental shift toward government control of the Internet. It gives the FCC the power to micromanage virtually every aspect of how the Internet works. It's an overreach that will let a Washington bureaucracy, and not the American people, decide the future of the online world. The decision gives the FCC the authority to treat broadband as it does the telephone wires. If Ajit is arguing that this translates into giving the government the ability to control every aspect of the Internet, then he should provide evidence of how the government has manipulated every single aspect of the telephone industry. Again, the FCC is saying: treat broadband like the telephone. So, there should be plenty of evidence of the government micromanaging the telephone market (if the dissent is correct). His dissent goes on: One facet of that control is rate regulation. For the first time, the FCC will regulate the rates that Internet service providers may charge and will set a price of zero for certain commercial agreements. The Commission can also outlaw pro-consumer service plans. If you like your current service plan, you should be able to keep your current service plan. The FCC shouldn't take it away from you. Consumers should expect their broadband bills to go up.
The plan explicitly opens the door to billions of dollars in new taxes on broadband. One estimate puts the total at $11 billion a year. Consumers' broadband speeds will be slower. Compare the broadband market in the U.S. to that in Europe, where broadband is generally regulated as a public utility. Today, 82% of Americans have access to 25 Mbps broadband speeds. Only 54% of Europeans do. Moreover, in the U.S., average mobile speeds are 30% faster than they are in Western Europe. This plan will reduce competition and drive smaller broadband providers out of business. That's why the plan is opposed by the country's smallest private competitors and many municipal broadband providers. Monopoly rules from a monopoly era will move us toward a monopoly. Unfortunately for the dissent, there has been a living lab (of sorts) for well over a decade, in which some countries opened up their Internet (with legislative tools similar to those just used by the FCC) while America remained closed. The results for those other countries: lower broadband rates, improved service and service options, lower bills (compared to the US) for better service, faster broadband speeds, and more competition in the Internet Service Provider business. All of which are the complete opposite of what Commissioner Pai's dissent claims. The evidence shows that the dissent's assertions are not grounded in reality. Larry Lessig's talk (from 2010) about America's broadband problem (below) outlines the approaches taken by the American government and the European Union and the impact they have had in each environment. You have to watch it. Broadband is cheaper, faster and better in Europe than in the United States - all because Europe did, decades ago, what the FCC just recently did. Ajit's dissent continues: The Internet is not broken. We do not need President Obama's plan to "fix it." The plan in front of us today was not formulated at the FCC through a transparent notice-and-comment rulemaking process.
As The Wall Street Journal reports, it was developed through "an unusual, secretive effort inside the White House." Indeed, White House officials, according to the Journal, functioned as a "parallel version of the FCC." Their work led to the President's announcement in November of his plan for Internet regulation, a plan which "blindsided" the FCC and "swept aside. . . months of work by [Chairman] Wheeler toward a compromise." The plan has glaring legal flaws that are sure to keep the Commission mired in litigation for a long, long time. I agree that the Internet is not broken. However, the ecosystem required for its continued growth was, and that is what the FCC addressed. I have no insight into the rulemaking process, so I cannot comment on that part of the dissent. I also agree that the current set of cable companies will continue to sue in order to get their way. However, I think their time would be better spent finding new ways to partner and collaborate to create additional revenue streams, rather than trying to tax the existing successful players and raise the barrier to entry for emerging startups. All that being said, I am not a lawyer, nor do I claim to be one. However, I do know technology. I am an expert and a student. In my mind, I want to think that Ajit had excellent technical guidance when forming his position on this issue. However, I am reminded that different generations have different views and levels of understanding of the issue. Hey! Old-school civil rights organizations got it wrong too. More here. What do you think?

Let me just say upfront that I personally think that the term "sharing economy" is a bunch of marketing and PR bullshit. My more diplomatic self would say that the sharing economy (as the media tells it) is a bastard misrepresentation, and it will have consequences that will be written off as unintended in the future, but that are presently known by the current architects of this economy.
The sharing economy (aka the peer-to-peer, mesh, or collaborative economy; also called collaborative consumption) is formally described as a socioeconomic system built around the sharing of human and physical resources. It includes the shared creation, production, distribution, trade and consumption of goods and services by different people and organizations. These systems seek to empower individuals, corporations, non-profits and government with information that enables distribution, sharing and reuse of excess capacity in goods and services. A common premise is that when information about goods is shared, the value of those goods may increase, for the business, for individuals, and for the community. Unfortunately, this vision of a sharing economy and the current companies that are sold as embodying this economy are at odds with each other. Businesses, like AirBNB, TaskRabbit and Uber, are often touted as the success stories of the sharing economy - unleashing the potential of the ordinary citizen to help his or her fellow citizen (for financial gain). However, the motives of these businesses are very clear - how to capitalize on cheap (if not free) resources, i.e. humans, their time and their possessions, to make a profit by decentralizing (or deregulating or disrupting) an existing sub-optimal industry or industry sector. The idea is simple and brilliant.
I call this a Gig Economy. Everyone has a gig. It is a new class of service industry where workers are expected to operate like micro-businesses with full risk, no union representation and very few safeguards. The Gig Economy (or the Share-The-Scraps economy, as Robert Reich calls it) is far different from the Sharing Economy and it is evident to me that one is playing off the goodwill and promise of the other to move itself forward. If I can see it, then others can too. The first interesting phenomenon is that this economy is projected to be at least twenty times bigger than it currently is within the next decade, settling in at around $355 billion in revenue. And though there is discrimination in this gig-gy environment (see here), minorities are still drawn to it for several reasons.
They are drawn to it as service consumers, because it offers a higher level of service than they were receiving in the industry before. Ask a black man having a night out in Washington DC to compare his "taxi" experience now versus when he did the same thing in 2007. I guarantee you that the majority of responses would indicate that the presence of Uber and Lyft has given him a lot of freedom and reduced his stress level. Minorities are drawn to this Gig economy as service providers, because it is an avenue that offers fewer institutional roadblocks to entry. A Hispanic woman from the projects with few options and lots of ambition can easily find a well-paying gig that would help her tremendously in getting a better life. Is this good or bad? I leave that up to you to decide. I just want us to all be fully aware of what is happening. Eyes Wide Open. What are your thoughts? When the Employment and Training Administration’s CareerOneStop team set out to redesign portions of the site (namely, career, training, and job resources), they didn’t immediately begin rewriting code. Instead, they embraced a user-centered approach that focused on the user experience (UX). In a general sense, focusing on UX means taking a step back to learn about users’ core needs and preferences before making changes to a product or service. Before the CareerOneStop team completed their site redesign, they asked users the following questions: “Who’s using CareerOneStop resources?” This was the first question the team asked the people they interviewed. The answer? Just about everybody. CareerOneStop users include job seekers, businesses, students, current workers, laid-off workers, veterans, workers with disabilities, workers with criminal records, career counselors and other workforce professionals, and just about every other member of the public.
Researchers were pleased to learn that such diverse audiences use CareerOneStop, and they talked to a varied group of users to answer the next questions. “Can users find what they need at CareerOneStop?” CareerOneStop offers such a large volume of information that some users weren’t able to quickly find the best resources for their unique needs. The research team talked to users and watched them use the system to find out what each type of user needs most (and most quickly), where they expect to find it, and what language is most meaningful to them. The researchers’ goal was to identify the clearest labels and determine the best way to organize the site’s information to help users connect with the most relevant information. CareerOneStop now offers streamlined access to targeted resources for each audience. “Are CareerOneStop’s tools easy to use?” CareerOneStop’s resources can’t just be easy to find—they also need to be easy to use and effective at helping users meet their career, training, and employment goals. The CareerOneStop team conducted usability testing on key tools and websites, during which they watched users interact with the site and learned how to improve functionality, organization, and language in order to better meet users’ needs. “How are users accessing CareerOneStop resources?” While some people are smartphone-wired 24/7 (one recent survey found that 83 percent of people use smartphones or tablets to job search), others may lack dependable Internet service on a daily basis. CareerOneStop’s goal is to make its resources valuable for all users. That’s why both the redesigned CareerOneStop.org site and the newly launched Credentials Center are mobile-friendly—that is, they automatically adjust to a user’s smartphone, tablet, or desktop screen, providing on-the-go employment, training, and job search help. Six key CareerOneStop tools—including Job Search, Training Finder, and Salary Finder—are also available as mobile web apps.
While making its resources accessible to mobile-equipped users, CareerOneStop didn’t want to leave behind those with limited Internet access or low computer literacy skills. For those who may access the Web at a public library or American Job Center, CareerOneStop provides printable guides, along with the ability to easily download and print key information and tool results. And for those who are less comfortable with technology, printed and video help materials provide step-by-step guidance through many tools. What’s next for CareerOneStop? The recently redesigned CareerOneStop and the new Credentials Center offer a wealth of assistance to anyone with career, training, or employment needs. Read more about key features in the press release or watch What can CareerOneStop do for you? But user-centered development doesn’t end with the launch of new products. The CareerOneStop team will continue collecting and learning from user feedback to continually improve its resources. Send us your feedback at [email protected]. This is the first draft of this Department of Labor blog post
These are exciting times for tech in the Federal government. DJ Patil has joined the administration as the first ever US Chief Data Scientist, Megan Smith is looking for more technologists to join the United States Digital Service, and the Presidential Innovation Fellowship program is recruiting for the next round of Fellows. The excitement is both top-down and bottom-up. There are exciting developments on the ground - percolating in the agencies. At a roundtable hosted at the John F. Kennedy School of Government in Boston, Massachusetts, the participants spent over three hours discussing the current state of the skills economy, its current shortcomings and the actions required to make progress in the field. The job postings of businesses that want employees identify the demand in the market. The resumes of job seekers represent the supply side, and the educational institutions that provide training facilitate the mechanisms to meet the long-term needs of the system. Unfortunately, there have been at least two decades of activity in the workforce/skills field that have yielded a fragmented mesh of non-interoperable and disconnected systems and portals. To ensure that the American worker has a fighting chance in this (and the upcoming) century, several fundamental steps must be taken and used as the bedrock of the industry. The first step is defining a skill. The individual components of a skill or competency need to be defined and agreed upon. The second step is articulating (and growing) the set of skill terms. This would enable a common language and reduce the possibility of a 'Tower of Babel' situation. The final step is the creation of a platform that demonstrates the value of an open API (Application Programming Interface) over a set of skills data that covers all aspects of the skills triangle. This platform is a public-private partnership that leverages Federal and business stakeholders.
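The three steps above can be sketched in miniature. Everything below is a hypothetical thought experiment, assuming the minimum moving parts are a skill record, a shared vocabulary of terms, and a lookup API; none of it reflects an actual standard.

```python
# Hypothetical sketch of the three steps; no part of this reflects a real
# standard or system, and all names are invented for illustration.

# Step 1: define what a skill record contains (a term plus agreed components).
def make_skill(term, components):
    return {"term": term, "components": tuple(components)}

# Step 2: grow a shared vocabulary of skill terms, avoiding a Tower of Babel.
vocabulary = {}

def register(skill):
    vocabulary[skill["term"]] = skill

# Step 3: a minimal "open API" over the vocabulary.
def lookup(term):
    return vocabulary.get(term)

register(make_skill("welding", ["arc welding", "safety procedures"]))
```

Even a toy like this makes the ordering argument visible: the lookup API in step 3 is only meaningful once steps 1 and 2 have pinned down what a skill is and what it is called.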
** Note: The opinions expressed within this post represent only the views of the author and not any person or organization. As a Presidential Innovation Fellow, I’ve had the pleasure of working with some of the most brilliant, forward-thinking minds in the workforce. During the past few months, my work has focused more specifically on the skills and training arenas, and one thing that struck me is that the skills ecosystem is the entire economy. Its success will lead to a thriving and solid economic base. It’s one thing to make this realization; translating this realization into action is quite another. The problem of understanding the skills space and iterating on it to produce the system that its users — i.e. job seekers, employers, colleges, workforce investment boards, etc. — need is a very important one. Relatedly, it’s one that will require the participation of many more people. Before we delve too deeply into this issue, let’s cover a few basics. The de facto source of skills information, both inside and outside the Federal government, is O*NET - the Occupational Information Network. What Is O*NET? O*NET is a data collection program that populates and maintains a current database of the detailed characteristics of workers, occupations, and skills. Currently, O*NET contains information on 974 detailed occupations. This information is gathered from a sample of national surveys of businesses and workers. For each of these 974 occupations, O*NET collects data on 250 occupational descriptors. As you might guess, O*NET is intended to be a definitive source for persistent occupation data — in other words, data on stable occupations that have existed (and will continue to exist) in the medium term. Examples of some such occupations include firefighter, teacher, and lawyer, to name a few. Fig. 1: O*NET Content Model O*NET supersedes the U.S.
Department of Labor’s (DOL’s) Dictionary of Occupational Titles (DOT) and provides additional occupational requirements not available in the DOT. The DOT is no longer supported by DOL. O*NET uses an occupational taxonomy, the O*NET-SOC, which is based on the 2010 version of the Standard Occupational Classification (SOC) mandated by the Office of Management and Budget (OMB) for use by all federal agencies collecting occupational and labor market information (LMI). What Is O*NET Good At? O*NET is the bedrock of occupational data — some users think of it as a one-stop shop for data on occupations that have existed (and will exist) for some time. In a phrase, it is the gold standard of data on occupational skills – data upon which laws are created and an economic ecosystem can be built. How Often Does O*NET Data Get Updated? The O*NET data set gets updated once a year. Currently, this is the fastest that comprehensive, consistent, representative, user-centered data, which complies with OMB’s Information Quality Guidelines and data collection requirements, can be gathered and processed. How Does O*NET Collect Its Data Anyway? Data collection operations are conducted by RTI International at its Operations Center in Raleigh, North Carolina, and at its Survey Support Department, also located in Raleigh. O*NET uses a 2-stage sample — first, businesses in industries that employ the type of worker are sampled and contacted — then, when it is confirmed that they employ those workers and will participate, a random sample of their workers in the occupation receive the O*NET survey form and respond directly. When necessary, this method may be supplemented with a sample selected from additional sources, such as professional and trade association membership lists, resulting in a dual-frame approach. An alternative method, based on sampling from lists of identified occupation experts, is used for occupations for which the primary method is inefficient.
This method is reserved for selected occupations, such as those with small employment scattered among many industries and those for which no employment data currently exist on which to base a sample, such as new and emerging occupations. At the current funding level ($6.2 million for PY 14), the O*NET grantee updates slightly more than 100 occupations per year with new survey data. Why Use Sampling And Surveys? O*NET currently uses sampling and surveys to gather its massive amounts of data because these are the best techniques that also comply with the OMB Information Quality Guidelines (IQG). These guidelines identify procedures for ensuring and maximizing the quality, objectivity, utility, and integrity of Federal information before that information is distributed. OMB defines objectivity as a measure of whether disseminated information is accurate, reliable, and unbiased. Additionally, this information has to be presented in an accurate, complete manner, and it’s subject to quality control or other review measures. O*NET was designed specifically to address the OMB IQG, including OMB information collection request standards, and to correct some fairly serious limitations in the Dictionary of Occupational Titles. What Are Other Foundational Principles of O*NET? In addition to serving as a comprehensive database of occupational data and promoting the integrity of said data, O*NET upholds several other principles. These include the following:
Using survey-based data enables O*NET to have comprehensive coverage, be very representative, have data that can be validated and cleaned relatively easily, allow the statistical calculation of margins of variance or error, and meet the OMB data collection requirements. Even though this type of data tends to be higher cost than other data types, survey-based data results in average or mean values (by design). What About Other Types of Data? There are two other primary types of data that could be included in the O*NET data set — transactional data and crowdsourced data. Transactional data tends to be privately owned and has a potential for response bias. Currently, occupational and industry coverage is limited, e.g. in certain industries, online job postings are the norm and in others, they are not. Also, the use of online job postings varies with the size of the firm — smaller employers are less likely to post job openings online. Finally, many postings don’t specify the information desired by O*NET curators. Crowdsourced data tends to be the least representative of the set, has the greatest potential for response bias, can be very difficult to validate independently and can easily capture leading or emerging variations. Both transactional and crowdsourced data would require significant investments in curation and management for them to satisfy the OMB Information Quality Guidelines and the O*NET Foundational Principles, which are intended to fix the problems with the DOT. What Is The Biggest Misconception About O*NET? “Why isn’t Python an O*NET Skill?”, “Why is Active Listening an O*NET Skill?”, “Why are O*NET Skills so broad and generic?” Members of the O*NET team hear these and similar questions with relative frequency. The above questions echo one primary misconception about O*NET: that O*NET Skills are too general.
What many users don’t realize is that O*NET skills — for example, active learning and programming — refer to cross-cutting competencies fundamental to a given occupation. More specific descriptors (such as systems, platforms, and other job-specific tools) are captured under the Tools & Technology section of an occupational profile. O*NET Skills, as perceived by most of the public, are actually a combination of what we describe in the Tools & Technology, Detailed Work Activity, Abilities, Tasks, Skills and Knowledge sections of an O*NET occupational profile. In short, Skills are just the tip of the occupational iceberg. What Are The Issues That You Hear Voiced Most About O*NET? The current set of concerns with the O*NET program include:
Official datasets from the Federal government are constructed to be a stable, steady base upon which more transitional elements can exist. This stability is designed to ensure that the American people see a consistent face when dealing with their government. For this reason, jobs that appear and disappear within the space of months or a few years, like Chief Fun Officer or Growth Hacker, are not immediately included in the O*NET database. Only occupations that have stood the test of time should be included in an authoritative source upon which regulatory decisions are made. How Do We Solve These Issues? A public-private partnership that produces a Skills Market Platform (SMP) (Fig. 2), which is an open, dynamic, growing, standards-based, public-facing platform for skills data. Fig. 2: Reference Architecture for a Skills Market Platform (SMP) The SMP should be run by a nonprofit that will handle the data curation, quality control, analytics and the public application programming interface (API).
It is assumed that all contributions to the SMP will be tagged with skills from the Skills Data Standard (SDS) – a standard that is currently being developed. Employers can provide tagged job descriptions. Job seekers can tag their resumes using the SDS taxonomy or import their skills using the OpenBadges (or a similar) framework. Colleges can tag their courses with the imbued skills. Certification bodies can tag their programs with the delivered skills. The core idea behind the SMP is to conceptualize, design, and build a national skills platform — which would be populated with data over time. Federal government information and existing taxonomies would provide the foundational structure and bring data from O*NET, from the Bureau of Labor Statistics, and from Commerce data sets such as the Census, the American Community Survey, and the Longitudinal Employer-Household Dynamics data sets. Other partners would be able to upload additional data—such as a college course catalog, or the competencies imparted by a specific certification—or data mined from tagged job postings or resumes. Local areas where community colleges and the workforce system are already collaborating closely with business — such that curriculum is closely tied to employer skills demand — could upload information on the linkages between employer skill demand and specific competency-based education modules. Such early adopters would be able to demonstrate the utility of such a platform and build a community to further build out and participate in the SMP. The ultimate goal is for the Federal government to facilitate the creation of the innovation platform for skills; enabling any American to freely and openly leverage a common language and knowledge base. What Needs To Be Done The Skills Market Platform relies on an existing:
Ever wondered what skills a particular certification course gives you? Ever debated the true skills details of a job description? Are you passionate about skills, competencies, abilities and knowledge? Contribute to this effort. Spend some time helping us create the Skills Data Standard. Donate some programming and design cycles to implementing the Skills Network Protocol and/or the Skills Innovation Layer. Help change the world as we know it. The skills market encompasses the entire economy. However, I feel like there are so many people who are not aware of the Federal sources in the skills arena. So, this blog will highlight the primary source from the Federal government, O*NET - the Occupational Information Network. Fig. 1: O*NET Content Model What Is O*NET? O*NET is a (data collection) program that populates and maintains a current database of the detailed characteristics of workers, occupations, and skills. Currently, O*NET contains information on 974 detailed occupations, which are gathered from a sample of national surveys of businesses and workers. For each occupation, O*NET collects data on 250 occupational descriptors. O*NET is intended to be a definitive source for persistent occupation data, i.e. stable occupations that have been (and will continue to be) in existence over the medium term. O*NET superseded the U.S. Department of Labor’s (DOL’s) Dictionary of Occupational Titles (DOT) and provides additional occupational requirements not available in the DOT. The DOT is no longer supported by DOL. O*NET uses an occupational taxonomy, the O*NET-SOC, which is based on the 2010 version of the Standard Occupational Classification (SOC) mandated by the Office of Management and Budget (OMB) for use by all federal agencies collecting occupational and labor market information (LMI). O*NET is the bedrock of occupational data. It offers a stable foundation that contains information on jobs and skills that are not fleeting.
In a phrase, it is the gold standard of data on occupational skills. Why Use Sampling and Surveys? The primary reason O*NET currently uses sampling and surveys is that these are the best techniques for maximizing the quality, objectivity, utility, and integrity of information prior to dissemination. O*NET uses the OMB Information Quality Guidelines for all Federal statistical information, which define objectivity as a measure of whether disseminated information is:

1. Accurate
2. Reliable
3. Unbiased
4. Presented/disseminated in an accurate, clear, complete, and unbiased manner
5. Subject to extensive review and/or quality control

What Are Other Foundational Principles of O*NET? O*NET is designed to be:
Why Does O*NET Only Contain Survey-based Data? Using survey-based data enables O*NET to have comprehensive coverage, be very representative, have data that can be validated and cleaned relatively easily, allow the statistical calculation of margins of variance or error, and meet the OMB data collection requirements. What About Other Types of Data? There are two other primary types of data that could be included in the O*NET data set — transactional data and crowdsourced data. Transactional data tends to be privately owned and has a potential for response bias. Currently, occupational and industry coverage is limited, e.g. in certain industries, online job postings are the norm and in others, they are not. Also, the use of online job postings varies with the size of the firm — smaller employers are less likely to post job openings online. Finally, many postings don’t specify the information desired by O*NET curators. Crowdsourced data tends to be the least representative of the set, has the greatest potential for response bias, can be very difficult to validate independently and can easily capture leading or emerging variations. Both transactional and crowdsourced data would require significant investments in curation and management for them to satisfy the OMB Information Quality Guidelines and the O*NET Foundational Principles, which are intended to fix the problems with the DOT. How Is O*NET Used? O*NET is the common language and framework that facilitates communication about industry skill needs among business, education, and the workforce investment system. The O*NET data is used to:
The O*NET database and companion O*NET Career Exploration Tools are also used by many private companies and public organizations to tailor applications to their needs and those of their customers. How Do I Access O*NET Data? The O*NET database is provided free of charge to the public through O*NET OnLine. O*NET Data is also available through the My Next Move, and My Next Move for Veterans websites; through the O*NET Web Services application programming interface (API); or by downloading the database, particularly by developers who provide applications targeted to specific communities or audiences. How Does O*NET Collect Its Data? The O*NET data set gets updated once a year. O*NET uses a 2-stage sample — first businesses in industries that employ the type of worker are sampled and contacted — then when it is confirmed that they employ those workers and will participate — a random sample of their workers in the occupation receive the O*NET survey form and respond directly. When necessary, this method may be supplemented with a sample selected from additional sources, such as professional and trade association membership lists, resulting in a dual-frame approach. An alternative method, based on sampling from lists of identified occupation experts, is used for occupations for which the primary method is inefficient. This method is reserved for selected occupations, such as those with small employment scattered among many industries and those for which no employment data currently exist on which to base a sample, such as new and emerging occupations. At its current funding level, O*NET updates slightly more than 100 occupations per year with new survey data. What Are The Survey Response Rates? Currently, more than 144,000 establishments and 182,000 employees have responded to the survey request, resulting in an establishment response rate of 76% and an employee response rate of 65%, which compare favorably to other establishment surveys. 
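The two-stage design described above can be sketched roughly in code. The establishment records, sample sizes, and helper names below are illustrative assumptions made for this post, not O*NET's actual survey machinery.

```python
import random

def two_stage_sample(establishments, occupation,
                     n_establishments=2, n_workers=2, seed=42):
    """Rough sketch of a two-stage occupational sample.

    Stage 1: sample establishments believed to employ the occupation.
    Stage 2: within each participating establishment, sample workers in
    that occupation to receive the survey form.
    All record shapes and sizes here are invented for illustration.
    """
    rng = random.Random(seed)
    # Stage 1: frame of establishments that employ this type of worker.
    frame = [e for e in establishments if occupation in e["workers"]]
    stage1 = rng.sample(frame, min(n_establishments, len(frame)))
    # Stage 2: random sample of workers in the occupation at each one.
    respondents = []
    for est in stage1:
        workers = est["workers"][occupation]
        respondents.extend(rng.sample(workers, min(n_workers, len(workers))))
    return respondents

# Illustrative frame: three establishments, two of which employ firefighters.
establishments = [
    {"name": "Acme Corp", "workers": {"firefighter": ["A", "B", "C"]}},
    {"name": "Beta LLC", "workers": {"teacher": ["D", "E"]}},
    {"name": "Gamma Inc", "workers": {"firefighter": ["F", "G"]}},
]
sample = two_stage_sample(establishments, "firefighter")
```

The sketch shows why the design is efficient: workers are only contacted inside establishments already confirmed to employ the occupation, rather than surveying the whole labor force.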
O*NET Usage Statistics During 2014, the O*NET websites (O*NET OnLine, My Next Move, My Next Move for Veterans, Mi Proximo Paso) averaged over 4 million visits per month. Fig. 2 shows the total site visits by year from 2002 to 2013. Fig. 2: Total Site Visits for O*NET web properties There is some pattern of seasonality that appears to follow the school calendar. Peak months with over 5 million visits in 2014 were: April, September, and October. The O*NET database has been downloaded 83,573 times from 2002 to 2014. Fig. 3 shows the database downloads in each year. Fig. 3: O*NET Database Downloads, 2002 - 2014 O*NET Web Services and APIs were introduced in 2012 and their usage increased steadily through 2014. At the close of 2014, there were 343 total registrants for O*NET Web Services. Fig. 4 shows the progress since 2012. Fig. 4: O*NET Web Services Registrants, 2012 - 2014 The breakdown of the users of O*NET Web Services is detailed in Fig 5. Fig. 5: Registrant Composition for O*NET Web Services Without the help of the O*NET team, this would not be possible. Thank you. A lot of the content comes from a factsheet that the team worked on.
This blog post is not about what you think it is about; and yet it is precisely what you think it is about. Let me explain. A few days ago, I was reading an article, originally published on October 3rd, 2014, entitled "The White Problem". The author is Quinn Norton, a white journalist who covers hackers, bodies, technologies and the Internet. Firstly, her post and her part 2, "How White People Got Made", should be required reading for all Americans. Secondly, it raised several important points that most students of the humanities, and of the history of peoples' interactions over the centuries, would appreciate and easily identify as artificial constructs erected in an effort to oppress and maintain privilege, which was and is the dominant driving force. For more context, read one of my all-time favorite books - "Pedagogy of the Oppressed". Tucked away in one of Quinn's paragraphs was a sentence, seemingly nonchalantly included and inconspicuously placed, that caught my eye and that perfectly represents the state of the computing industry today: "Whites spent hundreds of years excluding others from resources, often by violence, and claiming ownership over wealth created by those same non-whites." At the highest conceptual level, replace Whites with Tech Company Owners and you get a glimpse of what I saw. The last wave of uber-successful technology companies has two things in common:
A consequence of the second point is that content generators make pennies on their content; while content distributors and everyone else higher in the supply chain make orders of magnitude more money. It is standard practice for the current (and emerging) set of tech companies to write into a Terms of Use Agreement, Privacy Policy, End User License Agreement or other user agreement document that information supplied or generated by the user is the property of the company and can be viewed as an asset that they can use to serve their purposes. The ultimate purpose is making a profit for the business owners. Their user base does not see any compensation from the use of their data. To selectively quote Quinn, "Whites spent ..... years excluding others from resources, often by violence, and claiming ownership over wealth created by those same non-whites." The violence component enters the picture when you think of the strategies that these companies employ to either attain or maintain their positions of dominance. This includes wage suppression of their employee pool, corporate espionage of their competitors, subverting the market protections and creating barriers to entry and growth through legislative and other means, etc. If the current American premise that corporations are people too, supported by the Citizens United ruling, is to be believed, then all of the acts mentioned above would constitute violent, aggressive acts against real and corporate beings. If you don't believe that corporations are people, then the above behavior is simply immoral, callous and selfish on the part of the titans of corporate America and a direct jab at everyone else in society. Proponents may mention that this is all just good business and it is simply how the free market works. The Free Market is a product of everyone in it. It is dynamic. It is not something that is external and uncontrollable (or rather un-influence-able).
We all help define, create and evolve the free market and the rules about what is acceptable and what is not. Thus, behavior in the sole interest of maximizing profit for your shareholders (while ignoring the potentially severely negative consequences of one's actions) is not an unchangeable law of nature. It is yet another tool used by "whites" to wrest control of resources from everyone else. All of this is further complicated when you examine the makeup of the company owner base and the user base. The company owner base has a high percentage of white men. The user base has a high percentage of minorities. The employee mix of these companies is predominantly white and Asian males - an issue that has been written about enough elsewhere [See Silicon Valley's Diversity Problem and How Diverse is Silicon Valley]. I leave the rest for you to ponder. "It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way - in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only."
- A Tale of Two Cities (1859), Charles Dickens.

The article by Jules Polonetsky (Executive Director, Future of Privacy Forum) on "Advice to White House on Big Data", which was published today (April 1st, 2014), brought home an important point to me. The right conversation is not being had. An honest discussion is not taking place about the difficulty of the issues to be addressed with regard to Big Data privacy. When leaders and policy makers, who are not technical experts in the space, are provided with guidance that obfuscates the real issues, the outcomes will not be good for the general public. Thus, I feel compelled to speak up.

The segment of Jules' article that forced me to comment was: "While the Federal Trade Commission (FTC) has acknowledged that data that is effectively de-identified poses no significant privacy risk, there remains considerable debate over what effective de-identification requires." There is so much nuanced truth and falsity in that statement. It makes me wonder why it was specifically phrased that way. Why lead with the FTC's assertion? Why not simply state the truth? Is it more important to be polite and show deference than it is to have an honest conversation? The current chief technologist of the FTC, Latanya Sweeney, demonstrated over a decade ago that re-identification was possible for over 80 percent of the individuals in supposedly safe, de-identified data sets (read more). This is a fact that I am highly confident most members of the Future of Privacy Forum are well aware of. So, why lead with a statement of limited to no validity? This confusion led me to my comment on the article. However, let me re-state it here and provide a bit more detail.

What is De-Identification? Simply put, de-identification is the process of stripping identifying information from a data collection.
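As a minimal sketch of what that stripping looks like in practice (all column names and records here are hypothetical, chosen only for illustration), naive de-identification just drops the directly identifying fields and keeps the rest:

```python
# Naive de-identification: drop columns that directly identify a person.
# Column names and records are hypothetical, for illustration only.
DIRECT_IDENTIFIERS = {"name", "ssn", "email", "phone"}

def deidentify(records):
    """Return copies of the records with direct identifiers removed."""
    return [
        {k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}
        for rec in records
    ]

patients = [
    {"name": "A. Smith", "ssn": "123-45-6789",
     "zip": "02138", "birth_date": "1965-07-31", "sex": "F",
     "diagnosis": "hypertension"},
]

print(deidentify(patients))
# Note what survives: the zip/birth_date/sex triple is exactly the kind
# of "quasi-identifier" combination that makes re-identification possible.
```

The danger, as the rest of this post argues, is in what this process leaves behind.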
ALL current de-identification techniques leverage the same basic principle: hide an arbitrary individual (in the data set) in a (much larger) crowd of data, such that it is difficult to re-identify that individual. This is the foundation of the two most popular techniques: k-anonymity (and its various improvements) uses generalization and suppression to make multiple data descriptors share similar values, while differential privacy (and its enhancements) adds noise to the results of a data mining request made by an interested party.

The Problem At this point, you are probably saying to yourself, "This sounds good so far. What is your problem, Tyrone?" The problem is the fundamental assumption upon which de-identification algorithms are built: that you can separate the world into a set of distinct groups, private and not-private*. Once you have done this categorization, you simply apply a clever algorithm to the data and you are safe. Voilà, there is nothing to worry about. Unfortunately, this produces a false sense of safety/privacy, because you are not really safe from risk. Go to Google Scholar and search on any of these terms: "Re-identification risk", "De-identification", "Re-identification". Read Nate Anderson's article from 2009, "Anonymized" data really isn't - and here's why not. Even better, get Ross Anderson's slides on "Why Anonymity Fails?" from his talk at the Open Data Institute on April 4th, 2014.

In Big Data sets (and in data sets generally), the attributes/descriptors that are private change depending on the surrounding context. Thus, things that were thought to be not-private today may become private a second after midnight, when you receive new information or context. For Big Data sets (assuming you are merging information from multiple sources), there is no de-identification algorithm that will do anything more than provide people with a warm and fuzzy feeling that "we did something".
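The "we did something" failure mode is easy to demonstrate. Here is a toy sketch (every record below is fabricated for illustration) of the classic linkage attack: join a "de-identified" data set with a public record on the quasi-identifiers both happen to share.

```python
# Toy sketch of a linkage (re-identification) attack: link records in a
# "de-identified" data set to a public record that shares the same
# quasi-identifiers. All data below is fabricated for illustration.

deidentified_medical = [
    {"zip": "02138", "birth_date": "1965-07-31", "sex": "F",
     "diagnosis": "hypertension"},
    {"zip": "02139", "birth_date": "1971-02-14", "sex": "M",
     "diagnosis": "diabetes"},
]

public_voter_roll = [
    {"name": "A. Smith", "zip": "02138",
     "birth_date": "1965-07-31", "sex": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def reidentify(anonymous, public):
    """Link records that agree on every quasi-identifier."""
    matches = []
    for a in anonymous:
        for p in public:
            if all(a[q] == p[q] for q in QUASI_IDENTIFIERS):
                matches.append({**p, **a})
    return matches

for hit in reidentify(deidentified_medical, public_voter_roll):
    print(hit["name"], "->", hit["diagnosis"])  # A. Smith -> hypertension
```

Three innocuous-looking attributes are enough to pin a "hidden" record to a named person; this is essentially the ZIP/birth date/sex linkage Sweeney used against supposedly de-identified medical data.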
There is no real protection. Let's have an honest discussion about the topic for once. De-identification as a means of offering protection, especially in the context of Big Data, is a myth. I would love to hear about the practical de-identification techniques/algorithms that the Future of Privacy Forum recommends that will provide a measurably strong level of privacy. I am somewhat happy that this article was published (assuming it is not an April Fools' joke), because it provides us with the opportunity to engage in a frank discourse about de-identification. Hopefully.

*I am describing the world in the simplest terms possible, for a general audience. No need to post any comments or send hate mail about quasi-identifiers and/or other sub-categories of data types in the not-private category.
NB. You will also notice that I have not brought up the utility of de-identified data sets, which is still a thorny subject for computer science researchers.

It occurred to me this morning, while going through my news feeds, that it may not be obvious to everyone why companies do not (and are hesitant to) protect customer data. The "Really?" moment came while I was reading "Customer Data Requires Full Data Protection" by Christopher Burgess. I took it as a given that most people knew intuitively why enterprises choose not to protect customer data as rigorously as they do their intellectual property. It never occurred to me that it was a mystery to the general public, or that it was up for discussion, or even an issue worthy of thought cycles by the industry punditry. This leads me to the obvious.

Risk Customer data is the asset with the lowest risk profile. Even though it is necessary for the successful management of the customer relationship, and for some businesses it is the driving force behind their value (or valuation), the compromise (or damage) of that data has relatively little impact on the company itself. In legal terms, "harm" is done primarily to the data owner ("customer"), not the data steward ("company"). For example, each of the hundreds of millions of people affected by the Target breach faces a lifetime of vigilance over their financial identity and activity. The possible harm is significant, and the total impact on the data owners could reach the order of hundreds of billions of dollars. The possible harm for Target will be capped by legislative action and will be a (small) fraction of the company's profit margin. Over the long term, Target can weather this storm and remain a viable company - making this an acceptable risk. However, for their customers, this is potentially a life-altering event from which they cannot recover.
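The asymmetry described above can be made concrete with back-of-the-envelope arithmetic. Every figure below is a hypothetical assumption chosen only to illustrate the shape of the tradeoff, not a reported number:

```python
# Back-of-the-envelope sketch of the breach risk asymmetry.
# Every figure is a hypothetical assumption, not a reported number.

affected_customers = 100_000_000      # "hundreds of millions" of people
harm_per_customer = 1_000             # assumed lifetime cost per person (USD)
company_liability_cap = 500_000_000   # assumed legislative/settlement cap (USD)

total_customer_harm = affected_customers * harm_per_customer
print(f"Harm borne by customers: ${total_customer_harm:,}")    # $100,000,000,000
print(f"Harm borne by company:   ${company_liability_cap:,}")  # $500,000,000
print(f"Ratio: {total_customer_harm / company_liability_cap:.0f}x")  # 200x
```

Under these (made-up) assumptions, the data owners collectively absorb hundreds of times more harm than the data steward, which is exactly why the steward can treat the breach as an acceptable risk.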
The Expense of Data In most cases, customer data is either donated by the customer or gathered by the company's customer relationship managers. Compared to acquiring patents to protect the firm's business processes, or generating information on optimizing internal operations, the cost of customer data is negligible.

Cost-Benefit Tradeoff of Protection Though the benefits of protecting data are well established, and the current trend of multiple daily attacks is not dissipating, the discipline of data protection is a risk management process (and rightly so). Protection technology is expensive to implement and incorporate into an existing business; it has an (often negative) impact on internal operations (i.e., it changes how you perform your core functions, the performance of those functions, and the requirements needed to execute them); and it is viewed primarily as a cost center, with no real, measurable return on investment at the time of installation. Thus, data protection is a defensive investment whose value is perceived only after security and privacy incidents have been thwarted. So, companies choose to deploy data protection technologies for the data that is of the highest value to them.

Put these factors together and you get our current state of affairs, where "cheap", "low-risk" (to them) customer data is often left unprotected, because the benefit of protecting it is not seen as worth the cost. It becomes an acceptable (and tolerable) business risk that they can rationally take. Unfortunately, I believe this perspective is flawed and will do more harm than good in the long term. The first step in solving this issue is to have companies realize that compromised customer data will have a significant impact on their current and future profitability. In this environment, security and privacy are competitive differentiators; at least until all companies are on the same page.
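The cost-benefit reasoning above is often formalized in risk management as annualized loss expectancy (loss per incident times expected incidents per year). A sketch of that arithmetic, with all figures made up for illustration, shows why the same control can be rational for intellectual property and "not worth it" for customer data:

```python
# Sketch of the risk-management arithmetic behind "acceptable business
# risk": deploy a protection control only if the loss it avoids exceeds
# its cost. All figures are hypothetical, for illustration only.

def annualized_loss_expectancy(loss_per_incident, incidents_per_year):
    return loss_per_incident * incidents_per_year

def worth_protecting(ale_without, ale_with, annual_control_cost):
    """A control pays off if the loss it avoids exceeds its cost."""
    return (ale_without - ale_with) > annual_control_cost

# High-value intellectual property: large loss per incident.
ip_ale = annualized_loss_expectancy(50_000_000, 0.1)    # $5M/year exposure
# Customer data, as the company prices it: small, capped loss.
cust_ale = annualized_loss_expectancy(2_000_000, 0.5)   # $1M/year exposure

# Assume the same control cuts exposure by 80% and costs $1M/year.
print(worth_protecting(ip_ale, ip_ale * 0.2, 1_000_000))    # True
print(worth_protecting(cust_ale, cust_ale * 0.2, 1_000_000))  # False
```

The flaw the post points out lives in the inputs: if the company's loss-per-incident figure for customer data ignored reputational damage and future profitability, the "rational" answer comes out wrong.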
7/30/2013 Invitation to Join Mozilla Reception for San Francisco Premiere of "Terms and Conditions May Apply"

What follows below is an invite from the privacy team at Mozilla: Terms and Conditions May Apply is a new documentary about Internet privacy with impeccable timing. The film is in select theaters across the country starting July 12th and will serve as the launchpad for a social action campaign (to be housed at trackoff.us) that will demonstrate public demand for stronger privacy protections, including baseline privacy standards. Last week the film was deemed a NYT Critics' Pick. About the film: Terms and Conditions May Apply exposes what corporations and the government are learning about you with every website you visit, phone call you make, or app you download, with stories of surveillance so unbelievable they're almost funny. As privacy and civil liberties are eroded with every click, this timely documentary leaves you wondering: if your private information is for sale to the highest bidder, who's doing the bidding? In San Francisco, the film premieres at The Victoria Theatre on August 2nd at 7:15pm. Following the film will be a Q&A with the director, Cullen Hoback, and issue experts, including Harvey Anderson, Mozilla's SVP of Business and Legal Affairs, and a representative of the ACLU of Northern California. Mozilla would like to invite you to attend a pre-screening reception from 4:30-6:30pm at our San Francisco office. We will be serving light appetizers and beverages, showing some film clips (including the 2012 privacy-related Firefox Flicks winners), and discussing Internet privacy. Please RSVP for the Mozilla reception by end of day on Wednesday. See wiki.mozilla.org/SF for more info on our location. Our office is located in the former Gordon Biersch Brewery space on the Embarcadero and is convenient to BART, Caltrain, Muni, and ferry.
There are quite a few things that should concern the average Internet user: your computer can do things that will cause you harm; someone can spy on your Internet traffic; one of your service providers can sell your data, or hand it over to the authorities, without your consent or knowledge, and without compensating you.
Additionally, there is the risk of an arbitrary website owner collecting and looking through your Web surfing behavior. Unfortunately, all of these situations happen a few million times every second. Not knowing that these things are happening is the enabling factor for a trillion-dollar industry. Ultimately, you should be uneasy with the possibility of having your identity stolen, or being disqualified from opportunities, based on secretly collected data. (Excerpt from "Practical Privacy Protection Online For Free". Buy your copy now.)

I have been thinking lately about what it takes to have corporations start seriously thinking about data ownership from the point of view of the people who provide the information. What would it take for an entity, whose business model depends mainly on the self-proclaimed rule "we store your data, so we own your data", to give up some control (and revenue)? The idea that the owners of the "means of production" would claim to own "all raw material given to them" is ridiculous in any other field. However, it is accepted in the IT industry - a discussion I will have in another blog post. Back to the main thought: how do we get businesses to play fair with the people who give them data? Last week, Gartner hinted at the possible answer, and our possible future. In their special report examining the trends in security and risk, Gartner predicted that 90 percent of organizations will have personal data in IT systems they don't own or control. This prediction hints at a future where corporations are losing money and control of their revenue stream - data. It is only a matter of time before corporations figure out that when they provide data to companies that provide a service to them, the service provider should share the revenue it earns from using the gifting company's data. So, I am optimistic that corporations will see the value of creating a data ownership ecosystem - as a matter of self-interest and survival.
I am sure they will market it as being for the benefit of the regular Web user. However, I am less hopeful that the claimed benefits of this ecosystem (and of revised viewpoints on data ownership) will actually reach the pockets of the ordinary Web user.