
  • Regulating the regulators: Ensuring patent examiners use AI "responsibly"

    The patent examination process—whereby the U.S. Patent and Trademark Office reviews patent applications and grants those that meet the requirements for patentability—is tailor-made for the implementation of AI. But what are the risks to the quality and fairness of the patent examination process when the USPTO implements AI? And what policies and procedures should the USPTO put in place to mitigate those risks?

I. AI is made for patent examination

The work of patent examiners is part art and part science. It is simply not humanly possible for any patent examiner to both 1.) know all relevant prior art publications and 2.) apply the voluminous rules set forth by the patent law (as interpreted by the U.S. federal courts) and the USPTO to all that prior art to evaluate whether a patent application "reads on" a single reference (i.e., is substantially identical to it and thus "anticipated" under 35 U.S.C. § 102) or a combination of references (i.e., is obvious under 35 U.S.C. § 103). Certainly not after the explosion in the volume of prior art that came with the start of the information age and the rise of the Internet.

But training an AI model to build the perfect patent examiner—or, more precisely, a model that replicates the patent analytical frameworks and best practices employed by top patent examiners and that "knows" the Internet (or relevant subsets thereof)—is well within the realm of possibility in our incipient AI age. This is precisely the combination of massive amounts of data plus complicated but rule-bound analysis at which AI excels.

As discussed in more detail below, to date the USPTO has developed and implemented discriminative AI tools to assist with prior art searches. There have been no formal USPTO discussions to date of developing or implementing the generative AI tools necessary for invalidity analyses.

II. The Oct. 2023 Biden Executive Order on AI's application to patent examination

The Oct. 2023 Biden Executive Order on AI encourages agencies to carefully implement AI by "limit[ing] access, as necessary, to specific generative AI services based on specific risk assessments, ... at least for purposes of experimentation and routine tasks that carry a low risk of impacting Americans' rights." [1] The Order further directs the Office of Management and Budget (OMB) to "coordinate the use of AI across the Federal Government," including by "improv[ing] transparency for agencies' use of AI" by "collect[ing], reporting, and publish[ing] agency AI use cases, pursuant to section 7225(a) of the Advancing American AI Act." [2]

On Mar. 28, 2024, the OMB issued its first government-wide policy "to mitigate risks of artificial intelligence (AI) and harness its benefits" pursuant to the Oct. 2023 Executive Order. [3] The OMB policy states:

    By December 1, 2024, Federal agencies will be required to implement concrete safeguards when using AI in a way that could impact Americans' rights or safety. These safeguards include a range of mandatory actions to reliably assess, test, and monitor AI's impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI. [4]

The development of such safeguards and regular reporting on their effectiveness will presumptively be required both for the USPTO's existing prior art search tools and for any generative AI tools the USPTO develops to conduct invalidity analyses in the future. The level of detail required will presumably be lower for the former and higher for the latter.

III. The USPTO's AI initiatives to date have been discriminative

The USPTO's AI initiatives to date have been limited to two areas, both discriminative: 1.) prior art searching; and 2.) automatic patent classification. [5]

A. Prior art searching

1. The USPTO's application of AI

The USPTO released a beta version of its "More Like This Document" prior art search tool to a subset of examiners in March 2020. [6] "More Like This Document" uses AI algorithms to generate a list of domestic or foreign patent documents that are similar to a specific patent document. [7] In October 2021, the USPTO released the AI-based "More Like This Document" feature to all examiners in its Patent End-to-End (PE2E) Search platform. [8]

A year later, the USPTO introduced a new AI-based tool: the Similarity Search feature. [9] This improved upon the "More Like This Document" search tool in that the "anchoring" unit to be matched no longer needs to be an entire document. [10] With Similarity Search, paragraphs, sentences, or phrases within the document can be used instead, allowing for searches focused on specific concepts. [11]

2. USPTO's risk mitigation for AI-powered prior art searches

a. Just because an implementation of AI hurts you doesn't necessarily mean it's prejudicial....

As of October 2023, examiners reportedly had made 1.3 million searches using these AI search tools. [12] But as critical as USPTO prior art searches are to the patent prosecution and examination process, the USPTO's AI-powered "More Like This Document" and "Similarity Search" applications do not on their face bear much risk of bias or infringement of any patent applicant rights. Some number of patent applications have undoubtedly been rejected due to AI finding relevant prior art that would not otherwise have been located or considered. In particular, much foreign prior art that would never have come to light in its untranslated form is now on the table. But at least here the USPTO appears to be simply reaping the pure "better, faster, and cheaper" benefits that machine-learning AI inherently offers.
Patent applicants can't exactly claim their rights have been prejudiced here, particularly when the AI-located prior art is applied as anticipatory prior art under section 102. It is the patent examiner who ultimately evaluates whether such prior art reads on each and every element of each patent claim, using the exact same process used for prior art located through legacy non-AI tools.

b. Mitigating against bias due to demographic information

The USPTO mitigates whatever potential model biases may result from applicant, inventor, and assignee information by excluding this information from the training data entirely. [13]

c. ...and sunlight remains the best disinfectant....

The USPTO further ensures that the public receives clear notice when aspects of the examiner's search were performed using AI: "When an examiner selects a Similarity Search query to be included in the search notes of the application file wrapper, all documents retrieved by that query, along with the query itself, are listed in the search notes." [14] Such transparency mitigates the risk of AI hallucinations contaminating the prior art search results. All prior art found through such AI-powered tools, the database in which it was found, and the AI-powered tool and search query used to locate it are all saved and presented in the patent examiner's search record, and thus can be separately verified.

B. Automatic classification of patent applications into field of invention

When the USPTO receives a patent application, it classifies the application before the examination process begins. The USPTO classifies incoming patent applications to: identify and group the technology captured in an incoming application; match the technology in an application to a patent examiner; assign examination time to an application; and search for and retrieve relevant prior art.
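The USPTO has not published the internals of the "More Like This Document" or Similarity Search tools discussed above, but their core mechanic can be sketched in a few lines: embed each prior art document and the query snippet as vectors, then rank documents by cosine similarity. The toy corpus, the bag-of-words embedding, and all function names below are illustrative assumptions only; a production system would use learned neural embeddings rather than raw word counts.

```python
# Minimal sketch of an embedding-based "more like this" prior art search.
# Documents and the query snippet are embedded as bag-of-words term-frequency
# vectors and ranked by cosine similarity. Illustrative only.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Embed text as a sparse bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def similarity_search(query: str, corpus: dict[str, str], top_k: int = 3):
    """Rank corpus documents by similarity to the query snippet.

    As with the Similarity Search feature, the query can be a whole
    document or just a paragraph, sentence, or phrase, allowing the
    search to focus on a single claimed concept.
    """
    q = embed(query)
    ranked = sorted(corpus.items(),
                    key=lambda kv: cosine(q, embed(kv[1])),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]

# Hypothetical three-document prior art corpus
corpus = {
    "US1": "a rotary blade assembly for a lawn mower",
    "US2": "a lithium battery cell with a ceramic separator",
    "US3": "a mower deck housing a rotary cutting blade",
}
print(similarity_search("rotary blade for mowing grass", corpus, top_k=2))
```

The phrase-level query is the key design point: because the anchoring unit need not be an entire document, an examiner-style search can target one claimed concept at a time.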
For years, going back to at least the early 2010s, the USPTO relied on contractors for initial classification and reclassification services. Most recently, in 2015, the USPTO paid $95 million to a third-party contractor for a five-year contract. [15] The USPTO has been actively trying to improve its classification system to allow better matching of patent applications to the technological expertise and patent examination experience of examiners. [16]

1. The USPTO's application of AI

By 2020, the USPTO "developed an auto-classification tool that leverages machine learning to classify patent documents using the Cooperative Patent Classification (CPC) system." [17]

    The system can suggest CPC symbols, and includes the ability to identify claimed subject matter for additional refinement of the suggested CPC symbols similar to our AI search system. The auto-classification system also includes indicators that provide users with insight into the reasoning of the AI, by linking suggested CPC symbols to specific portions of the document. Enhanced feedback mechanisms designed into the system integrate with our existing classification processes to support training the AI. [18]

A contracted classifier can take months to classify patent applications. When the USPTO applies its CPC auto-classification tool, this bottleneck is presumptively alleviated. [19]

2. USPTO's risk mitigation for AI-powered automatic classification of patent applications

This author has been unable to find any USPTO publication that comprehensively lays out how its AI-powered CPC auto-classification system was rolled out in the past, or how it is implemented today. As such, it is impossible to assess any risk mitigation steps the USPTO takes for this system. From the scattered information that is available, it is unclear: whether the auto-classification AI is used today for all patent applications; whether contractors are still part of the classification process today, and if so, how; whether human review and approval is required before any AI auto-classification is actually implemented, and if so, what that entails; and what supervisory oversight is provided either pre-implementation or post-implementation (e.g., when a patent applicant challenges the classification).

3. ...impactful, but again, is it really prejudicial...?

How a patent application is classified can significantly impact not only the timetable for the patent examination process but even whether or not a patent is granted. Certain patent classification groups are known to have higher issuance rates (i.e., to grant more patents) and/or to conduct their patent examinations faster than others. [20] Sophisticated patent prosecutors strategically draft patent applications to steer them toward favorable patent classification groups and away from unfavorable ones. [21] The USPTO's AI-powered CPC auto-classification system presumably reduces the effectiveness of such attempts. But it is not exactly a fundamental or protectable right to "patent classification group"-shop, either.

IV. Potential future applications of generative AI in patent examination raise concerns

In the USPTO's current limited applications of AI in its "More Like This Document" and "Similarity Search" tools, the AI is simply getting more, and more relevant, prior art before patent examiners, who are otherwise left to carry out the same invalidity analysis they have always carried out. In other words, human patent examiners are carrying out the invalidity analysis and are using these discriminative AI tools simply to collect relevant prior art for their analyses. But generative AI tools will inevitably be developed that can carry out more and more of the core invalidity analysis that could previously be done only by human patent examiners. Even if there is a formal or informal USPTO policy barring the use of ChatGPT and the like by patent examiners today, it would frankly be surprising if at least some patent examiners were not already experimenting with using ChatGPT "offline" for such analyses. [22]

No one wants AI to make dispositive determinations impinging on individual rights in any field, including patent law. But when AI-generated "recommendations" along the lines of "Claim 1 may be obvious over prior art reference nos. 1 (covering elements a, c, and f), 2, and 3" start being made available to patent examiners—whether sanctioned or unsanctioned by the USPTO—and provide some form of ranking of such recommendations, the USPTO will have a fundamental problem. As such generative AI capabilities and outputs improve, it is easy to see the nominally independent, free-thinking patent examiner trending more toward a rubber stamp of the AI's recommendations, at least in some cases.

Given the complexities inherent to any patent invalidity analysis, it is impossible to predict how this would all play out in reality. Some areas of concern implicating potentially undue substantive impacts come to mind: patent examiners may become less likely to concede that prior art found via AI searching is not directed to the field of the invention than they would have on their own; and patent examiners may become more likely to combine a higher number of secondary references for a section 103 rejection when so recommended by AI than they otherwise would have.

V. Conclusion

The USPTO should continue its efforts to implement AI in its patent examination processes. Generative AI has the potential to significantly alleviate many of the worst deficiencies of the U.S. patent system, most notably issues of patent quality and the overall time and cost of the examination process.
Any reduction in the "PTAB gap" (i.e., the gap between the patent invalidity analyses made during the initial patent examination process and those made during inter partes review (IPR) before the Patent Trial and Appeal Board (PTAB)), presuming it is achieved on principled grounds, would make the overall patent system more efficient many times over. But given the rapid development of AI capabilities today, we will quickly leave the realm of "routine tasks that carry a low risk of impacting Americans' rights" and enter complicated areas of direct or indirect substantive impact on the patent examination process. The federal government has set up its broad framework for how USPTO and OMB oversight will theoretically mitigate any undue impact that the implementation of AI may have on the rights of patent applicants. The devil, as always, will be in the details.

[1] Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Oct. 30, 2023, Sec. 10.1(f) ("To advance the responsible and secure use of generative AI in the Federal Government"), available here.
[2] Id. at Sec. 8(e).
[3] Fact Sheet: Vice President Harris Announces OMB Policy to Advance Governance, Innovation, and Risk Management in Federal Agencies' Use of Artificial Intelligence, Mar. 28, 2024, available here.
[4] Id.
[5] Both applications are examples of discriminative AI (i.e., a machine-learning model that is trained to classify data into groups). The USPTO developed them before OpenAI's Nov. 2022 release of ChatGPT, an example of generative AI (i.e., a machine-learning model that is trained to create new data), which has ushered AI into the public consciousness.
[6] Artificial intelligence tools at the USPTO, Director's Blog: the latest from USPTO leadership, May 18, 2021, available here.
[7] New PE2E Search Tool Using AI Search Features, 1494 OG 251, Jan. 11, 2022, available here.
[8] Id.
[9] 2022 Annual Report, Patent Public Advisory Committee, Nov. 1, 2022, available here.
[10] Id.
[11] Id.
[12] See Annalise Gilbert, Invention Applications Face AI-Backed Scrutiny at Patent Office, Bloomberg, Oct. 2, 2023, available here.
[13] New artificial intelligence functionality in PE2E Search, USPTO, at 6, available here.
[14] Id. ("The search results produced using emphasized text and/or CPC symbols for a given application will be listed in the PE2E search history in addition to the application number and any applicable emphasized text snippets and CPC symbols, except where the inclusion of the text or the application number would violate the confidentiality provision of 35 U.S.C. 122(a).").
[15] Press Release: Serco Awarded $95 Million Patent Classification Contract, Nov. 30, 2015, available here.
[16] See also USPTO Needs to Improve Oversight and Implementation of Patent Classification and Routing Processes, Dept. of Commerce, Office of Inspector General, OIG-23-026-A, available here.
[17] USPTO, supra note 14.
[18] Id.
[19] For background discussion of the patent classification process and the impact of delays, see Carl Oppedahl, When USPTO Classifies an Application Incorrectly, IPWatchdog.com, Mar. 11, 2014, available here.
[20] How Classification Works at the USPTO: Targeted Drafting to Influence Prosecution Outcomes, LexisNexis, June 16, 2020, available here.
[21] Id.
[22] According to a Salesforce Nov. 2023 survey: 55% of all employees have used unapproved generative AI tools at work, and 40% of all workplace generative AI users have used banned tools at work.

  • Privacy-enhancing technologies: Will AI providers be held accountable too?

    [Privacy-Enhancing Technologies] could usher in a paradigm shift in how we as a society protect privacy while deriving knowledge from data. However, there are also risks that PETs could provide a false veneer of privacy, misleading people into believing that a data sharing arrangement is more private than it really is.

    Alexander Macgillivray & Tess deBlanc-Knowles, U.S. Office of Science and Technology Policy, Advancing a Vision for Privacy-Enhancing Technologies, June 28, 2022, available here.

I. Introduction

President Biden's October 2023 Executive Order on AI pays special attention to fostering the development and implementation of "privacy-enhancing technologies" (PETs). Sec. 9 (Protecting Privacy) of the Order is focused almost entirely on the topic. The Office of Management and Budget, Federal Privacy Council, Interagency Council on Statistical Policy, Office of Science and Technology Policy, Secretary of Commerce, and National Science Foundation all have designated roles to play.

[Chart: Federal regulatory agencies charged with developing and implementing privacy-enhancing technologies (PETs) in President Biden's Oct. 2023 Executive Order on AI]

The focus is entirely on the development and implementation of PETs by the federal government. There is no discussion of imposing any duties, requirements, or liabilities on the generative AI providers who contribute significantly to the threat to our private information. Let's explore why.

II. Privacy-enhancing technologies (PETs)

A. All named PETs focus on data repository and sharing protections

    The term "privacy-enhancing technology" means any software or hardware solution, technical process, technique, or other technological means of mitigating privacy risks arising from data processing, including by enhancing predictability, manageability, disassociability, storage, security, and confidentiality.
    These technological means may include secure multiparty computation, homomorphic encryption, zero-knowledge proofs, federated learning, secure enclaves, differential privacy, and synthetic-data-generation tools.

    Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Sec. 3(z), Oct. 30, 2023, available here.

Data disassociation or deidentification is a key concept for privacy-enhancing technologies. Data deidentification "unlinks" individuals from their sensitive information. Once personal identifiers are removed or transformed using data deidentification, the data can be reused and shared without implicating data privacy (presuming the data can't be re-paired with the individuals from whom it was originally disassociated, which AI is really, really good at doing). "Disassociability" is defined by NIST as "[e]nabling the processing of [personally-identifiable information] or events without association to individuals or devices beyond the operational requirements of the system."

A full discussion of the seven different "technological means" for privacy-enhancing technologies (PETs) listed above is beyond the scope of this blog article.[1] Like all security measures, they generally entail a trade-off between privacy and utility. It is, for example, certainly technologically possible to encrypt all data. And it would be legally advantageous to do so, as virtually all states exempt you from any notification requirements in the event of a data breach when you do.[2] But the general assumption remains that while highly sensitive information should be encrypted, it would be a bridge too far to impose such a requirement on all just-sorta-sensitive information. The concern is that it would be cost-prohibitive to do so and/or would slow down processing speeds for systems too much.[3] Notably, all seven listed technological means focus on the data repository side.
They are repository and sharing protections that can be implemented only by the organizations that manage person-related data for their businesses.

B. No PETs focus on personally-identifiable information (PII) detection

But what about technologies or practices that the AI providers who actively collect massive amounts of data to train their models—through webcrawlers or otherwise—can implement? Shouldn't AI providers bear responsibility for: mitigating against the collection of private information in the first instance; and/or scrubbing collected data of such private information after the fact?

The closest the Biden Oct. 2023 Executive Order gets to this is:

    Sec. 9. Protecting Privacy. (a) To mitigate privacy risks potentially exacerbated by AI — including by AI's facilitation of the collection or use of information about individuals, or the making of inferences about individuals — the Director of OMB shall: (i) evaluate and take steps to identify commercially available information (CAI) procured by agencies, particularly CAI that contains personally identifiable information and including CAI procured from data brokers and CAI procured and processed indirectly through vendors ...

This responsibility is as top-down as it gets. It is directed at the federal government and its collection or use of any "commercially available information (CAI) procured by agencies, particularly CAI that contains personally identifiable information and including CAI procured from data brokers and CAI procured and processed indirectly through vendors...." The Director of the Office of Management and Budget is directed to "evaluate and take steps to identify" such CAI, to issue a Request for Information, and to develop privacy impact assessments to mitigate against the resulting privacy risks, "including those that are further exacerbated by AI."[4] And as to those who are doing the exacerbating—the generative AI providers crawling the internet to train their AI models?
The Executive Order is notably silent toward them and toward this entire issue.

I acknowledge that it is objectively unfair on some level to impose liability for the collection of publicly available information. It is, however, also objectively unfair on every level to the public that a single unauthorized release of your social security number, bank information, etc. should doom you to having that information permanently publicly available due to the inexorable work of generative AI webcrawlers and model building. In particular since generative AI providers have the perfect tool to screen for and identify, after the fact, private information in the massive amounts of data that they collect—their very own AI technology.[5]

III. Conclusion

Perhaps that side of the coin is actually being addressed in full elsewhere, and we shouldn't rush to judgment as to our federal government's efforts here. But there is no right to data privacy in the U.S.[6] And there are no laws meaningfully limiting the collection of private information by generative AI providers, requiring them to screen out such private information after collecting it, or preventing them from reselling such information after collecting it. The general rule, after all, is that if it is publicly available on the internet, then anyone has the right to collect it and do what they want with it. A strong case can be made that this is generally how it should be.

This general rule, however, becomes problematic when applied to information that was acquired illegally by a third person who then posts it on the internet, and when generative AI providers and implementers release their webcrawlers with full knowledge that they are collecting data that no one would want publicly released and that was likely originally acquired and posted through such an illegal data breach or other action without the consent of the "owner" of the data.
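The claim that providers already hold the perfect screening tool is not hard to make concrete. A minimal sketch of post-collection PII redaction follows, assuming simple regular-expression patterns; a real pipeline would layer learned named-entity models and validation logic on top, and every pattern and helper name here is a hypothetical illustration, not any provider's actual practice.

```python
# Hedged sketch of post-collection PII screening over crawled text.
# U.S.-centric, deliberately simplified patterns; illustrative only.
import re

# Common structured-PII patterns keyed by a type label
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

sample = "Reach Jane at jane.doe@example.com; SSN 123-45-6789 leaked."
print(redact(sample))
# → Reach Jane at [EMAIL REDACTED]; SSN [SSN REDACTED] leaked.
```

Even this crude pass would catch the social security numbers and bank-style identifiers discussed above before the data ever reaches a training corpus; the question the article raises is whether anything requires providers to run it.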
Unless laws are eventually passed directly addressing these issues, there really isn't any reason for generative AI providers to do anything here, is there? The simple reality is that, left unchecked, their incentive is to do just enough to support what they really want—no laws passed on this issue at all, or better yet laws favorable to their positions—so they can continue to minimize any legal liability they might otherwise become subject to.

© 2024 Wood Phillips

[1] For the best high-level technological discussion of the subject that I have come across, see Katharine Jarmul, Privacy Enhancing Technologies: An Introduction for Technologists, martinfowler.com, May 30, 2023, available here.
[2] See The Sedona Conference, Incident Response Guide, 21 Sedona Conf. J. 125, 182-83 (2020).
[3] See Rebecca Herold, Top 4 Reasons Encryption Is Not Used, Privacy & Security Brainiacs, March 21, 2020, available here.
[4] Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Sec. 9(i)-(ii), Oct. 30, 2023, available here.
[5] A Google search for "PII detection" and the reams of hits it yields strongly suggest that a lot could be done on this front if so required....
[6] For discussion, see my blog article Your duties to your AI customers and their private data, available here.

  • Algorithmic discrimination: good luck with that

    Bias in, bias out: The algorithmic discrimination challenge. Of all the intractable issues that AI gives rise to, algorithmic discrimination is in my view the most unsolvable. It resides at the convergence of: civil rights and race, gender, and other relations—the biggest political football(s) of our times; and how AI works—e.g., it is trained on data that itself is biased. READ MORE →

  • What the Oct. 2023 Executive Order on AI ducks

    The Oct. 2023 Executive Order on AI: You missed a spot.... While more aspirational than specific, the Oct. 2023 Executive Order on AI was a step in the right direction for addressing these issues. But the Executive Order punts on or omits entirely some key issues.... READ MORE →

  • Guide to U.S. federal agency regulation of AI

    U.S. federal regulation of AI: A visual guide. The below chart is a visual guide to the various U.S. executive agencies’ regulation of artificial intelligence by subject area, based in part on President Biden’s October 2023 Executive Order on AI. Since the focus of the Order is on “the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” it is (perhaps) not… READ MORE →

  • Use of copyrighted works to train your AI

    The law and ethics of generative AI use of copyrighted works. My deeply profound guiding Principle No. 1 for Responsible AI (“Don’t be scum”) starts not being quite so helpful right around here. My IP head and my IP heart diverge when analyzing generative AI use of copyrighted works to train AI models. This is where law and ethics collide. Let’s look… READ MORE →

  • Managing your AI customer data

    Your duties to your AI customers and their private data. Any discussion of “responsible AI” and your use of AI customer data simply cannot start with the pretense: “It’s currently not illegal, so we’re good!” As noted in our discussion of deepfake pornography last week, the law simply hasn’t caught up with the technology. Perhaps the same can be said for data privacy issues. But… READ MORE →

  • Legal and ethical issues of bypassing internet paywalls to train your AI

    Your guide to responsible AI contracting. Part 1: Don't be scum. What is “responsible AI”? Most people would agree we should implement AI responsibly and appropriately balance the interests of each stakeholder and society. But AI may be the most disruptive innovation in history, promising to increase overall productivity and displace workers to an unprecedented degree. Figuring out the proper balance is not easy.… READ MORE →

  • Government regulation might limit the use of trade secrets to protect your AI

    Are we sure trade secrets are the way to go for protecting your AI? Trade secrets for AI are at risk due to potential regulatory measures by the U.S. government. The President’s 2023 Executive Order on AI does not address its impact on trade secret protections for AI. AI innovations are often kept as trade secrets, especially by AI providers who use SaaS, which makes reverse engineering impossible. However,… READ MORE →

  • Successful AI implementation will be critical for all businesses in the future

    [Without intellectual property (IP), you are replaceable,...] ...and without implementing AI successfully, you will be replaced. Hyperbole? No, at least not for many industries. Businesses and individuals who figure out how to implement AI successfully will operate far more efficiently than those who don’t. And the already starving artist is facing its greatest existential crisis; there’s seemingly nothing to do but protest. Employee displacement by generative AI—which generates high-quality text, images, … READ MORE →

  • IP is at the core of all companies

    Without intellectual property (IP), you are replaceable.... If your business has no IP, you are replaceable. By any of your competitors. How much of your potential patent, trademark, copyright, and trade secret rights (the 4 main categories of IP) have you realized? This article is the definitive visual guide for understanding IP and your tech company. It provides a strategic roadmap for… READ MORE →

  • Protecting your IP rights when using AI to innovate

    Your employees’ use of AI in developing your company’s products or services puts your company’s intellectual property rights at risk. According to a Salesforce Nov. 2023 survey: 55% of all employees have used unapproved generative AI tools at work, and 40% of all workplace generative AI users have used banned tools at work. But when your employees leverage generative AI—with and, in particular, without permission—as part of your company’s research and development or marketing processes, they put your company at risk by:

A. Jeopardizing your company’s patent and trade secret rights through:

1. Public disclosure of your company’s proprietary information. All prompts your employees type into a generative AI and all output that is generated may potentially be deemed public disclosures. This potentially destroys any proprietary rights you might have over the information and also potentially invalidates any IP rights over any innovations you may have developed based on it.

2. Loss of ownership of your company’s patent rights. Your employees’ use of generative AI in R&D inherently leaves your company subject to subsequent challenges that your inventive process did not have the requisite “significant contributions” from a human being to be eligible for a patent.

B. Putting your company at risk of third-party copyright and other IP infringement claims. Your employees’ use of AI to generate text, images, music, etc. inherently puts your company at risk of third-party copyright and other infringement claims based on the AI’s use of copyrighted works to train its models.

Mitigating against these risks—and against spiraling litigation costs spent countering such ownership, invalidity, and third-party infringement challenges—from the start will help you sidestep such future landmines.
