Today, we're all walking around with mini-computers in our pockets that are more powerful than all of NASA's combined computing power in 1969. In fact, most of us can hardly remember a time before smartphones—despite the fact that the first iPhone launched in 2007.
Tech evolves at a breakneck pace, but regulation is often slow to catch up.
Here’s what I mean:
Companies have been using our personal data to target sales and advertising for years, but it wasn’t until last year that lawmakers stepped in to protect consumers.
On the heels of growing privacy concerns, the EU passed a regulation called the General Data Protection Regulation (GDPR) to give individuals more control over their personal data. It has since triggered major conversations about privacy across various sectors of the tech industry—and put a keen eye on how people are thinking about implementing software.
What this means is, today, it's harder than ever to start a software company.
And with increasing regulations shifting the burden of privacy breaches onto companies, it’s becoming even harder.
Here’s what you need to know about the recent and upcoming tech regulations that are sure to shape Silicon Valley, the tech industry as a whole, and society at large:
The data privacy crackdown continues apace.
In a post-GDPR world, privacy might be the single biggest issue consumers care about.
Thanks to high-profile data breaches and scandals—looking at you, Facebook—growing numbers of consumers are aware that their personal data is being collected and exploited. While many users are opting out by quitting their cherished social media services, lawmakers are also starting to step in to protect consumers.
GDPR requires organizations to obtain verifiable consent from EU residents that is explicit, informed, and freely given.
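As a rough illustration of what "verifiable, explicit, informed, and freely given" consent can mean in practice, here is a minimal sketch of how a service might record and check consent. The class and field names are hypothetical, invented for this example—they are not drawn from the regulation or any particular compliance library:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch: one way a service might record GDPR-style consent.
# Field names are illustrative, not taken from the regulation itself.
@dataclass(frozen=True)
class ConsentRecord:
    user_id: str
    purpose: str          # the specific processing purpose consented to
    granted: bool         # explicit opt-in, never assumed by default
    timestamp: datetime   # when consent was given, for verifiability
    wording_shown: str    # the exact notice the user saw (informed consent)

def has_valid_consent(records, user_id, purpose):
    """Return True only if the user explicitly opted in to this purpose."""
    return any(
        r.user_id == user_id and r.purpose == purpose and r.granted
        for r in records
    )

records = [
    ConsentRecord("u1", "marketing_email", True,
                  datetime.now(timezone.utc),
                  "We'd like to send you marketing emails. Opt in?"),
]
print(has_valid_consent(records, "u1", "marketing_email"))  # True
print(has_valid_consent(records, "u1", "ad_targeting"))     # False
```

The design point is that consent is per-purpose and opt-in by default: the absence of a record means no processing, rather than processing until the user objects.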
But GDPR isn’t the only major regulation making waves in Silicon Valley. Last year, California passed a sweeping consumer privacy law, the California Consumer Privacy Act (CCPA), that could force significant changes on companies that deal in personal data. In particular, the Act affords California residents the right to be informed about what kinds of personal data companies have collected and why, the right to request the deletion of personal information, the right to opt out of the sale of personal information, and the right to access their personal information in a “readily useable format” that enables its transfer to third parties. It’s been called the most stringent data protection regime in U.S. history.
And just this January, Senator Marco Rubio proposed the American Data Dissemination (ADD) Act, which would task the Federal Trade Commission (FTC) with proposing privacy rules that Congress could implement as law. These rules would supersede existing state laws, creating a national framework for privacy. While the bill doesn’t give the FTC new power to regulate companies like Google and Facebook, it does allow the FTC to create binding rules if Congress fails to do so within two years.
These increased regulations provide consumers with protections against unwarranted intrusions from sophisticated actors in the private tech industry.
They also shed light on where the tech industry falls short and allow for introspection.
In the wake of these regulations, all tech companies should really be taking a hard look at how protected (or not) their customers are.
Regulation around AI is heating up—and for good reason.
Automation is disrupting every single industry, from manufacturing to clerical work.
And I’ve seen this firsthand.
I grew up in a blue-collar, working-class family in the Midwest. Over the course of my childhood, my dad was laid off many times—due to automation. Even as the broader economy improved, automation upended countless families like mine across the nation.
But today, automation and the capabilities of AI technology are advancing rapidly, meaning more and more jobs are on the chopping block. In fact, according to a McKinsey report, as many as 375 million workers worldwide may need to change occupations by 2030.
Beyond job loss, there are a number of other ethical concerns worth considering. Take self-driving cars. If an AI-powered car is faced with the split-second dilemma of hitting either a woman and a stroller, a homeless man, or a business executive, who writes the algorithm that makes that choice? Who decides who gets to live or die? Should it be computer programmers?
This is where we need regulation and government intervention.
Tesla’s Elon Musk, who believes AI presents a “fundamental risk to the existence of human civilization,” calls for proactive regulation “before it is too late.” Meanwhile Bill Gates, who believes AI will “allow us to produce a lot more goods and services with less labor,” foresees labor force dislocations and has suggested a robot tax.
President Trump’s new executive order “Maintaining American Leadership in Artificial Intelligence” calls for a coordinated approach to AI regulation. This next year will see the emergence of a U.S. regulatory framework.
But until that framework is in place, companies should focus on using AI to help humans, not replace them or their judgment.
Copyright laws are evolving.
The recent expansion of internet copyright regulation is a controversial topic—to say the least.
Copyright laws weren’t created for the digital age. Technology and internet connectivity have made it easier than ever for an individual to have an idea, record it in words, images or sound, and then release it to the world.
In this kind of environment, copyright reform is crucial.
And while these laws will create increased burdens on companies, they’re mostly positive. The onus of copyright infringement will no longer lie with the individual.
This year, a majority of European governments signaled support for a deal that would overhaul copyright laws, impacting everything from YouTube to meme channels. The idea behind the Directive on Copyright in the Digital Single Market, colloquially known as the EU Copyright Directive, is to instill aggressive protection for rightsholders by making the platforms hosting user content liable for copyright infringement. User content will pass through even more aggressive filters, and platforms like YouTube will be forced to “take measures to ensure the functioning of agreements concluded with rights-holders for the use of their works,” as the bill states.
If GDPR is any indication, American regulations will follow suit.
But increased regulation may hit businesses where it hurts—and stall innovation.
While regulation helps protect people and ideas, the cost of responsible tech will undoubtedly fall to businesses—and it should.
Perhaps more importantly, this increased regulation is likely to have a major impact on the pace of innovation. Companies won't be able to do as much with consumer data, which will likely slow the rollout of the feature-rich applications we’ve come to love and expect. Think of apps that use consumer data to drive automation, help users with discovery via large amounts of personal data (fitness and wellness apps, for instance), or use AI and user-created data to drive intelligence.
But, of course, it’s not all bad news. Copyright regulation will also force companies to be more accountable and empower the individual consumer.
Right now, if you upload something you've made and someone else steals it and posts it to their site, it's on you, the individual, to track that down and try to get it taken down. But with more regulation, the platform hosting the infringing content will be held responsible.
We're just seeing the beginning of the regulation onslaught, and it’s going to force some major changes. Only time will tell just how big an impact these regulations will have on tech and the future accessibility of the field to newcomers.
Either way, it’s time companies start strategizing.