

  • Privacy-enhancing technologies: Will AI providers be held accountable too?

    [Privacy-Enhancing Technologies] could usher in a paradigm shift in how we as a society protect privacy while deriving knowledge from data. However, there are also risks that PETs could provide a false veneer of privacy, misleading people into believing that a data sharing arrangement is more private than it really is.

    Alexander Macgillivray & Tess deBlanc-Knowles, U.S. Office of Science and Technology Policy, Advancing a Vision for Privacy-Enhancing Technologies, June 28, 2022, available here.

    I. Introduction

    President Biden's October 2023 Executive Order on AI pays special attention to fostering the development and implementation of "privacy-enhancing technologies" (PETs). Sec. 9 (Protecting Privacy) of the Order is focused almost entirely on the topic. The Office of Management and Budget, the Federal Privacy Council, the Interagency Council on Statistical Policy, the Office of Science and Technology Policy, the Secretary of Commerce, and the National Science Foundation all have designated roles to play.

    [Chart: Federal regulatory agencies charged with developing and implementing privacy-enhancing technologies (PETs) under President Biden's Oct. 2023 Executive Order on AI]

    The focus is entirely on the development and implementation of PETs by the federal government. There is no discussion of imposing any duties, requirements, or liabilities on the generative AI providers who contribute significantly to the threat to our private information. Let's explore why.

    II. Privacy-enhancing technologies (PETs)

    A. All named PETs focus on data repository and sharing protections

    The term “privacy-enhancing technology” means any software or hardware solution, technical process, technique, or other technological means of mitigating privacy risks arising from data processing, including by enhancing predictability, manageability, disassociability, storage, security, and confidentiality. These technological means may include secure multiparty computation, homomorphic encryption, zero-knowledge proofs, federated learning, secure enclaves, differential privacy, and synthetic-data-generation tools.

    Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Sec. 3(z), Oct. 30, 2023, available here.

    Data disassociation, or deidentification, is a key concept for privacy-enhancing technologies. Deidentification "unlinks" individuals from their sensitive information. Once personal identifiers are removed or transformed, the data can be reused and shared without implicating data privacy (presuming the data can't be re-paired with the individuals it was originally disassociated from, which AI is really, really good at doing). "Disassociability" is defined by NIST as "[e]nabling the processing of [personally-identifiable information] or events without association to individuals or devices beyond the operational requirements of the system."

    A full discussion of the 7 different "technological means" for privacy-enhancing technologies (PETs) listed above is beyond the scope of this blog article.1 Like all security measures, they generally entail a trade-off between privacy and utility.
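    To make that privacy/utility trade-off concrete, here is a minimal sketch of one of the listed technological means, differential privacy, using the classic Laplace mechanism. It is purely illustrative and not drawn from the Executive Order or any source cited here; the dataset, bounds, and epsilon values are hypothetical.

    ```python
    import numpy as np

    def private_mean(values, epsilon, lower, upper):
        """Differentially private mean via the Laplace mechanism.

        Each value is clipped to [lower, upper] so that no single record
        can shift the mean by more than (upper - lower) / n -- the query's
        sensitivity. Laplace noise scaled to sensitivity / epsilon is then
        added to the true mean of the clipped values.
        """
        clipped = np.clip(np.asarray(values, dtype=float), lower, upper)
        sensitivity = (upper - lower) / len(clipped)
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
        return clipped.mean() + noise

    salaries = [52_000, 61_000, 58_500, 75_000, 49_000]  # hypothetical records
    print(private_mean(salaries, epsilon=0.1, lower=0, upper=200_000))  # strong privacy, noisy answer
    print(private_mean(salaries, epsilon=5.0, lower=0, upper=200_000))  # weaker privacy, accurate answer
    ```

    The epsilon parameter is the trade-off dial: lowering it strengthens the privacy guarantee but makes the published statistic noisier and therefore less useful.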
    It is, for example, certainly technologically possible to encrypt all data. And it would be legally advantageous to do so, as virtually all states exempt you from any notification requirements in the event of a data breach when you do.2 But the general assumption remains that while highly sensitive information should be encrypted, it would be a bridge too far to impose such a requirement on all just-sorta-sensitive information. The concern is that it would be cost-prohibitive to do so and/or would slow down processing speeds for systems too much.3

    Notably, all 7 listed technological means focus on the data repository side. They are repository and sharing protections that can only be implemented by the organizations that manage person-related data for their businesses.

    B. No PETs focus on personally-identifiable information (PII) detection

    But what about technologies or practices that the AI providers who actively collect massive amounts of data to train their models—through webcrawlers or otherwise—can implement? Shouldn't AI providers bear responsibility for:

    • mitigating against the collection of private information in the first instance, and/or
    • scrubbing collected data of such private information after the fact?

    The closest the Biden Oct. 2023 Executive Order gets to this is:

    Sec. 9. Protecting Privacy. (a) To mitigate privacy risks potentially exacerbated by AI — including by AI’s facilitation of the collection or use of information about individuals, or the making of inferences about individuals — the Director of OMB shall: (i) evaluate and take steps to identify commercially available information (CAI) procured by agencies, particularly CAI that contains personally identifiable information and including CAI procured from data brokers and CAI procured and processed indirectly through vendors ...

    This responsibility is as top-down as it gets. It is directed at the federal government and its collection or use of any "commercially available information (CAI) procured by agencies, particularly CAI that contains personally identifiable information and including CAI procured from data brokers and CAI procured and processed indirectly through vendors...." The Director of the Office of Management and Budget is directed to "evaluate and take steps to identify" such CAI, to issue a Request for Information, and to develop privacy impact assessments to mitigate against resulting privacy risks, "including those that are further exacerbated by AI."4

    And as to those doing the exacerbating—the generative AI providers crawling the internet to train their AI models? The Executive Order is notably silent toward them and on this entire issue.

    I acknowledge that it is objectively unfair on some levels to impose liability for the collection of publicly available information. It is, however, also objectively unfair on every level to the public that a single unauthorized release of your social security number, bank information, etc. should doom you to having that information permanently publicly available due to the inexorable work of generative AI webcrawlers and model building. Particularly since generative AI providers have the perfect tool to screen for and identify, after the fact, the private information in the massive amounts of data they collect—their very own AI technology.5
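    On footnote 5's point: even before reaching AI-based tools, rule-based PII screening of scraped text is straightforward. Below is a minimal, hypothetical sketch; the patterns and placeholder labels are my own illustrative assumptions, not anything drawn from the Executive Order, and a production pipeline would pair such rules with trained named-entity-recognition models.

    ```python
    import re

    # Hypothetical, minimal patterns. Real PII screening would use many more
    # patterns plus machine-learned classifiers for names, addresses, etc.
    PII_PATTERNS = {
        "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def scrub_pii(text: str) -> str:
        """Replace anything matching a known PII pattern with a typed placeholder."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
        return text

    sample = "Reach Jane at jane.doe@example.com or 555-867-5309; SSN 123-45-6789."
    print(scrub_pii(sample))
    # Reach Jane at [REDACTED-EMAIL] or [REDACTED-PHONE]; SSN [REDACTED-SSN].
    ```

    Rules like these catch only well-formatted identifiers; the article's point is that the providers also hold far more capable AI classifiers for exactly this task.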
    III. Conclusion

    Perhaps that side of the coin is actually being addressed in full elsewhere, and we shouldn't rush to judgment as to our federal government's efforts here. But there is no right to data privacy in the U.S.6 And there are no laws meaningfully limiting the collection of private information by generative AI providers, requiring them to screen out such private information after collecting it, or preventing them from reselling such information after collecting it.

    The general rule, after all, is that if it is publicly available on the internet, then anyone has the right to collect it and do what they want with it. A strong case can be made that this is generally how it should be. This general rule, however, becomes problematic when applied to information that was acquired illegally by a third person who then posts it on the internet. And it becomes more problematic still when generative AI providers and implementers release their webcrawlers with full knowledge that they are collecting data that no one would want publicly released and that was likely originally acquired and posted through just such an illegal data breach or other action taken without the consent of the "owner" of the data.

    Unless laws are eventually passed directly addressing these issues, there really isn't any reason for generative AI providers to do anything here, is there? The simple reality is that, left unchecked, their incentive is to do just enough to support what they really want—no laws passed on this issue at all or, better yet, laws favorable to their positions—so they can continue to minimize any legal liability they might otherwise become subject to.

    © 2024 Wood Phillips

    1 For the best high-level technological discussion of the subject that I have come across, see Katharine Jarmul, Privacy Enhancing Technologies: An Introduction for Technologists, martinfowler.com, May 30, 2023, available here.
    2 See The Sedona Conference, Incident Response Guide, 21 Sedona Conf. J. 125, 182-83 (2020).
    3 See Rebecca Herold, Top 4 Reasons Encryption Is Not Used, Privacy & Security Brainiacs, March 21, 2020, available here.
    4 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Sec. 9(i)-(ii), Oct. 30, 2023, available here.
    5 A Google search for "PII detection" and the reams of hits it yields strongly suggests that a lot could be done on this front if so required....
    6 For discussion, see my blog article Your duties to your AI customers and their private data, available here.

  • Algorithmic discrimination: good luck with that

    Bias in, bias out: The algorithmic discrimination challenge. Of all the intractable issues that AI gives rise to, algorithmic discrimination is in my view the most unsolvable. It resides at the convergence of: civil rights and race, gender, and other relations—the biggest political football(s) of our times; and how AI works—e.g., it is trained on data that itself is biased. READ MORE → © 2024 Wood Phillips

  • What the Oct. 2023 Executive Order on AI ducks.

    The Oct. 2023 Executive Order on AI: You missed a spot.... While more aspirational than specific, the Oct. 2023 Executive Order on AI was a step in the right direction for addressing these issues. But the Executive Order punts on or omits entirely some key issues.... READ MORE → © 2024 Wood Phillips

  • Guide to U.S. federal agency regulation of AI

    U.S. federal regulation of AI: A visual guide. The below chart is a visual guide to the various U.S. executive agencies’ regulation of artificial intelligence by subject area, based in part on President Biden’s October 2023 Executive Order on AI. Since the focus of the Order is on “the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” it is (perhaps) not… READ MORE → © 2024 Wood Phillips

  • Use of copyrighted works to train your AI

    The law and ethics of generative AI use of copyrighted works. My deeply profound guiding Principle No. 1 for Responsible AI (“Don’t be scum”) starts not being quite so helpful right around here. My IP head and my IP heart diverge here when analyzing generative AI use of copyrighted works to train its AI models. This is where law and ethics collide. Let’s look… READ MORE → © 2024 Wood Phillips

  • Managing your AI customer data

    Your duties to your AI customers and their private data. Any discussion of “responsible AI” and your use of AI customer data simply cannot start with the pretense: “It’s currently not illegal, so we’re good!” As noted in our discussion of deepfake pornography last week, the law simply hasn’t caught up with the technology. Perhaps the same can be said for data privacy issues. But… READ MORE → © 2024 Wood Phillips

  • Legal and ethical issues of bypassing internet paywalls to train your AI

    Your guide to responsible AI contracting. Part 1: Don't be scum. What is “responsible AI”? Most people would agree we should implement AI responsibly and appropriately balance the interests of each stakeholder and society. But AI may be the most disruptive innovation in history, promising to increase overall productivity and displace workers to an unprecedented degree. Figuring out the proper balance is not easy.… READ MORE → © 2024 Wood Phillips

  • Government regulation might limit the use of trade secrets to protect your AI

    Are we sure trade secrets are the way to go for protecting your AI? Trade secrets for AI are at risk due to potential regulatory measures by the U.S. government. The President’s 2023 Executive Order on AI does not address its impact on trade secret protections for AI. AI innovations are often kept as trade secrets, especially for AI providers who use SaaS, which makes reverse engineering effectively impossible. However,… READ MORE → © 2024 Wood Phillips

  • Successful AI implementation will be critical for all businesses in the future

    [Without intellectual property (IP), you are replaceable,...] ...and without implementing AI successfully, you will be replaced. Hyperbole? No, at least not for many industries. Businesses and individuals who figure out how to implement AI successfully will operate far more efficiently than those who don’t. And already starving artists are facing their greatest existential crisis; there’s seemingly nothing to do but protest. Employee displacement by generative AI—which generates high-quality text, images, … READ MORE → © 2024 Wood Phillips

  • IP is at the core of all companies

    Without intellectual property (IP), you are replaceable.... If your business has no IP, you are replaceable. By any of your competitors. How much of your potential patent, trademark, copyright, and trade secret rights (the 4 main categories of IP) have you realized? This article is the definitive visual guide for understanding IP and your tech company. It provides a strategic roadmap for… READ MORE → © 2024 Wood Phillips

  • Protecting your IP rights when using AI to innovate

    Your employees’ use of AI in developing your company’s products or services puts your company’s intellectual property rights at risk. According to a Salesforce Nov. 2023 survey: 55% of all employees have used unapproved generative AI tools at work, and 40% of all workplace generative AI users have used banned tools at work. But when your employees leverage generative AI—with and in particular without permission—as part of your company’s research and development or marketing processes, they put your company at risk by:

    A. Jeopardizing your company’s patent and trade secret rights through:

    1. Public disclosure of your company’s proprietary information. All prompts your employees type into a generative AI and all output that is generated may potentially be deemed public disclosures. This potentially destroys any proprietary rights you might have over the information and also potentially invalidates any IP rights over any innovations you may have developed based on it.

    2. Loss of ownership of your company’s patent rights. Your employees’ use of generative AI in R&D inherently leaves your company subject to subsequent challenges that your inventive process did not have the requisite “significant contributions” from a human being to be eligible for a patent.

    B. Putting your company at risk of third-party copyright and other IP infringement claims. Your employees’ use of AI to generate text, images, music, etc. inherently puts your company at risk of third-party copyright and other infringement claims based on the AI’s use of copyrighted works to train its models.

    Mitigating against these risks—and against spiraling litigation costs spent countering such ownership, invalidity, and third-party infringement challenges—from the start will help you sidestep such future landmines. © 2024 Wood Phillips

  • Implement your AI, mitigate against your AI legal risk

    If your business owns no IP, you are replaceable by any of your competitors. And without implementing AI successfully, you will be replaced.

    In the U.S., your business will be subject to greater regulatory scrutiny if it involves the use of AI in a “sensitive domain,” including financial systems, healthcare, housing, education, and employment issues (see visual guide below). Employee displacement, and even the displacement of entire professions, by generative AI will be perhaps the greatest societal issue of our times. As such, AI-related issues will inevitably comprise a disproportionate share of litigation in the coming years.

    The use of AI gives rise to various types of AI legal risk, including:

    • IP infringement claims, including:
        • claims against the output of your AI, e.g., copyright infringement and right of publicity (via deepfakes)
        • claims against your application of AI, e.g., patent infringement and trade secret misappropriation
    • “algorithmic discrimination” claims, including for your use of any AI in automated employment decision tools
    • data privacy violations, including:
        • lack of sufficient protection of data collected from your AI customers
        • unauthorized sale of your AI customers’ data
        • unauthorized data mining to train your AI models
    • fraudulent and negligent misrepresentation claims against the output of your generative AI, including deepfakes and AI “hallucinations”
    • misuse of your AI for terrorism (including bio and nuclear), cyberattacks, attacks on financial systems, etc.

    Your primary defense against a claim directed against your business’s use of AI will be fundamentally an issue of compliance with and/or whether you have taken “reasonable measures” with respect to:

    • existing principles of IP and technology law,
    • existing applicable regulatory regimes, and
    • most importantly, the AI-specific legal and regulatory regimes that various national and state governments will develop in the coming months and years.

    Your business’s ability to set up effective company policies and negotiate the AI terms in your contracts in compliance with applicable laws and regulations—not just as they currently are but where they are headed—will define the scope of your potential liability for AI legal risk in the future.

    Wood Phillips provides intellectual property and artificial intelligence audits for your business, assessing areas of potential improvement for your policies, procedures, and contracting on these issues. We provide counsel for all the ways that IP and AI issues can and will impact your business. © 2024 Wood Phillips
