An AI safe harbor provision would create guidelines for development and security without premature regulation
The conversation around artificial intelligence has, rather prematurely, taken on a binary quality, as if we are debating two sides of a coin rather than something more complex: "Let builders build" vs. "Regulate." Ironically, both positions stem from recognizing the incredible power and promise of the tipping point we have reached, but neither leaves room for ambiguity. Fortunately, there is some precedent here that can help; we just need to go back to the earlier days of the Internet and the concept of safe harbor.
"Safe harbor" is a regulatory framework providing that certain conduct does not violate a rule as long as specific conditions are met. It is used to provide clarity in an otherwise complex situation, or to give a party the benefit of the doubt as long as they meet generally accepted standards of reasonableness. Perhaps the best-known example in our industry is the Digital Millennium Copyright Act (DMCA) of 1998, which provided a safe harbor shielding Internet companies from copyright infringement by their end users as long as several preconditions were met (such as not receiving a direct financial benefit from, or having knowledge of, the infringing materials, etc.).
The DMCA allowed billions of people around the world to express themselves online, sparked new business-model experiments, and gave businesses a clear path to staying legal. It's not perfect, and it can be abused, but it met the reality of the moment in a meaningful way. And it made my career possible, working with user-generated content (UGC) at Second Life, AdSense, and YouTube. During my time at the world's largest video site, I coined the ongoing public metric "# hours of video uploaded every minute" to help put YouTube's growth into perspective and to frame for regulators how unfathomable, and how unreliable, it would be to ask humans to review 100% of the content manually.
Now, 25 years later, we have a new wave, but it's not UGC; it's AI and, er, user-generated computer content (UGCC), or something. From my perspective, it's as significant a potential shift in capabilities as anything I've experienced in my life so far. It's the evolution of what I expected: not software that eats the world, but software that enables it. And it's moving very, very fast. So much so that it's perfectly reasonable to suggest the industry slow down, specifically that it stop building new models while we all digest the impact of the change. But that is not what I would advocate. Instead, I would accelerate the creation of a temporary safe harbor for AI, so our best engineers and companies can continue to innovate while being incentivized to build guardrails and operate openly.
What would an AI safe harbor look like? Start with something like: “For the next 12 months, any developer of AI models would be shielded from legal liability as long as they meet certain evolving standards.” For example, model owners must:
- Transparency: for a given publicly available URL or piece of submitted media, the ability to query whether that domain is included in the model's training set. Simply put, visibility is the first step; the whole "don't train on my data" question (i.e., a robots.txt for AI) will need more thought and trade-offs from a regulatory perspective.
- Prompt records for research: provide a statistically significant sample of prompt/input records (no requester information, just the prompts themselves) on a regular basis for researchers to study and analyze. So long as you do not knowingly, willfully, and exclusively target and exploit particular copyrighted sources, you will have safe harbor from infringement.
- Responsibility: documented trust and safety protocols to enable escalation of violations of your Terms of Service, plus some form of aggregate transparency statistics on these issues.
- Observability: auditable, but not necessarily public, frameworks for measuring the "quality" of results.
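As a sketch, the first provision (training-set transparency) could amount to a simple lookup service against a published manifest. Everything below is hypothetical: the manifest of training domains, the `ai.txt`-style policy it implies, and the function name are illustrative assumptions, not an existing API or standard.

```python
from urllib.parse import urlparse

# Hypothetical manifest: domains whose content was included in the training set.
# A real system would publish and sign something like this per model version.
TRAINING_DOMAINS = {"example.com", "wikipedia.org"}

def domain_in_training_set(url: str) -> bool:
    """Return True if the URL's domain appears in the model's training manifest."""
    host = urlparse(url).netloc.lower()
    # Strip a leading "www." so "www.example.com" matches "example.com".
    if host.startswith("www."):
        host = host[4:]
    return host in TRAINING_DOMAINS

print(domain_in_training_set("https://www.example.com/page"))  # True
print(domain_in_training_set("https://nytimes.com/article"))   # False
```

Even a coarse domain-level answer like this would give publishers the visibility the provision asks for, before any finer-grained "don't train on my data" mechanism is standardized.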
To avoid a compliance burden that only the largest and best-funded companies could bear, the AI safe harbor would also exempt all startups and researchers that have not yet released public base models and/or have fewer than, say, 100,000 queries/requests per day. These parties are simply "safe" so long as they act in good faith.
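The exemption test above is simple enough to state as code. This is a toy sketch of the proposed rule only; the `Provider` record, the function name, and the 100,000/day threshold (the essay's own "say" figure) are illustrative, not a real eligibility API.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    """Hypothetical record describing an AI model provider."""
    has_released_public_base_model: bool
    daily_queries: int

# Illustrative threshold from the proposal: ~100,000 queries/requests per day.
QUERY_THRESHOLD = 100_000

def exempt_from_compliance(p: Provider) -> bool:
    """Small actors are exempt: no public base model released, or low query volume."""
    return (not p.has_released_public_base_model) or p.daily_queries < QUERY_THRESHOLD

print(exempt_from_compliance(Provider(False, 5_000_000)))  # True: no public model yet
print(exempt_from_compliance(Provider(True, 50_000)))      # True: below the threshold
print(exempt_from_compliance(Provider(True, 250_000)))     # False: must comply
```

Using "or" rather than "and" reflects the essay's "and/or": clearing either bar keeps a small team out of the compliance regime while it acts in good faith.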
At the end of the 12-month period, the AI safe harbor could be extended as-is, modified and renewed, or superseded by general regulation. But the goal is to remove ambiguity and start steering companies toward common standards (and the common good), while maintaining our competitive advantages domestically and globally (China!).
What do you think?