In Ex parte Desjardins, Appeal 2024-000567 (PTAB Appeals Review Panel, Sept. 26, 2025), the PTAB Appeals Review Panel (“Review Panel”) vacated a decision of the PTAB that had sua sponte entered a new ground of rejection under 35 U.S.C. § 101 against claims relating to artificial intelligence (AI) systems. The opinion was authored by the new Under Secretary of Commerce for Intellectual Property and USPTO Director, John Squires. Procedurally, the Review Panel intervened after the Applicant’s request for rehearing was denied. The Review Panel’s decision suggests that the USPTO under Director Squires may shift toward a more favorable stance on subject matter eligibility.
The claim in question related to training machine learning models:
1. A computer-implemented method of training a machine learning model,
wherein the machine learning model has at least a plurality of parameters and has been trained on a first machine learning task using first training data to determine first values of the plurality of parameters of the machine learning model, and
wherein the method comprises:
determining, for each of the plurality of parameters, a respective measure of an importance of the parameter to the first machine learning task, comprising:
computing, based on the first values of the plurality of parameters determined by training the machine learning model on the first machine learning task, an approximation of a posterior distribution over possible values of the plurality of parameters,
assigning, using the approximation, a value to each of the plurality of parameters, the value being the respective measure of the importance of the parameter to the first machine learning task and approximating a probability that the first value of the parameter after the training on the first machine learning task is a correct value of the parameter given the first training data used to train the machine learning model on the first machine learning task;
obtaining second training data for training the machine learning model on a second, different machine learning task; and
training the machine learning model on the second machine learning task by training the machine learning model on the second training data to adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task,
wherein adjusting the first values of the plurality of parameters comprises adjusting the first values of the plurality of parameters to optimize an objective function that depends in part on a penalty term that is based on the determined measures of importance of the plurality of parameters to the first machine learning task.
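For readers less familiar with the underlying technique, the shape of the claimed objective can be illustrated with a short, purely hypothetical sketch. This is not the patented implementation: the toy linear model, the diagonal curvature-style importance measure, and every function name below are illustrative assumptions chosen only to mirror the claim language (a per-parameter importance measure derived from the first task, then second-task training against an objective with an importance-weighted penalty term).

```python
import numpy as np

# Hypothetical toy model: linear regression y = X @ w with squared loss.
def task_loss(w, X, y):
    """Mean squared error of a linear model on one task's data."""
    residual = X @ w - y
    return float(np.mean(residual ** 2))

def importance(w_first, X_first):
    """Assign each parameter an importance value for the first task.

    Here we use a simple diagonal curvature-style approximation: for
    squared loss on a linear model, the per-parameter curvature is
    proportional to the mean squared feature value. This stands in for
    the claim's "approximation of a posterior distribution."
    """
    return np.mean(X_first ** 2, axis=0)

def penalized_loss(w, X2, y2, w_first, F, lam=1.0):
    """Second-task objective with a penalty term, per the claim's shape:
    parameters important to the first task are discouraged from moving
    far from the values learned on that task."""
    penalty = np.sum(F * (w - w_first) ** 2)
    return task_loss(w, X2, y2) + lam * penalty
```

The key point for eligibility purposes is that the claim recites *how* the result is achieved: a concrete importance computation and a concrete penalty structure, rather than merely the outcome of "protecting performance" on the first task.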
In its analysis, the Review Panel followed the PTO’s two-step framework based on Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208 (2014), and Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66 (2012). See Ex parte Desjardins at 5-9; see also M.P.E.P. § 2106. The Review Panel combined Step 1 and Step 2A Prong 1. Id. at 6-7. It noted that the Board had determined that the claim feature “computing…, an approximation of a posterior distribution over possible values of the plurality of parameters” recited a mathematical calculation, and therefore an abstract idea. Id. at 6. Because the Appellant did not dispute this finding in its Request for Rehearing, the Review Panel accepted it.
At Step 2A Prong 2, the Review Panel cited the Applicant’s specification, which explains the technical advantages achieved by the described invention. Specifically, it describes how training a single machine learning model on multiple tasks allows the model to perform well on each task without the need to store and manage a separate model for each task. This unified approach reduces storage requirements and system complexity, because only one set of model parameters must be maintained. The specification further details that, when the model is adapted to new tasks, its parameters are adjusted in consideration of their importance to previously learned tasks. This enables the model to effectively acquire new capabilities (i.e., learn new tasks) while retaining knowledge from prior tasks, thereby preventing catastrophic forgetting. These practical benefits demonstrate that the claimed invention improves system efficiency and addresses real-world challenges in machine learning deployment, particularly in environments where memory and computational resources are limited and continual learning is required. More particularly, paragraph 0021 of the specification explained:
Particular embodiments of the subject matter described in this specification can be implemented so as to realize one or more of the following advantages. By training the same machine learning model on multiple tasks as described in this specification, once the model has been trained, the model can be used for each of the multiple tasks with an acceptable level of performance. As a result, systems that need to be able to achieve acceptable performance on multiple tasks can do so while using less of their storage capacity and having reduced system complexity. For example, by maintaining a single instance of a model rather than multiple different instances of a model each having different parameter values, only one set of parameters needs to be stored rather than multiple different parameter sets, reducing the amount of storage space required while maintaining acceptable performance on each task. In addition, by training the model on a new task by adjusting values of parameters of the model to optimize an objective function that depends in part on how important the parameters are to previously learned task(s), the model can effectively learn new tasks in succession whilst protecting knowledge about previous tasks.
In view of this passage in the specification, the Review Panel agreed with the Appellant’s arguments and stated, “The Specification also recites that the claimed improvement allows artificial intelligence (AI) systems to ‘us[e] less of their storage capacity’ and enables ‘reduced system complexity.’” Ex parte Desjardins at 9 (citing Specification paragraph 0021).
In addition, the Review Panel offered policy considerations. Namely, it warned that broadly excluding AI innovations from patent protection risks undermining the United States’ leadership in the rapidly advancing field of AI. By treating all machine learning as mere unpatentable algorithms and dismissing the remaining claim elements as generic computer components, patent examiners may unjustifiably deny patent eligibility to valuable AI inventions. The Review Panel argued that such oversimplified, generalized evaluations could stifle innovation and progress in AI, and it called for more careful, nuanced patent examination practices that recognize the complexity and significance of AI technologies, stating:
Categorically excluding AI innovations from patent protection in the United States jeopardizes America’s leadership in this critical emerging technology. Yet, under the panel’s reasoning, many AI innovations are potentially unpatentable—even if they are adequately described and nonobvious—because the panel essentially equated any machine learning with an unpatentable ‘algorithm’ and the remaining additional elements as ‘generic computer components,’ without adequate explanation. Examiners and panels should not evaluate claims at such a high level of generality.
Id. (internal citation omitted).
Thus, at least at the PTO level under Director Squires, it seems that the pendulum will swing back in favor of subject matter eligibility for AI inventions. The Federal Circuit, however, has reiterated its longstanding position that the M.P.E.P. and related PTO guidance, while instructive for examiners, are not binding on the courts. Shortly after the Ex parte Desjardins decision, the Federal Circuit stated in Rideshare Displays, Inc. v. Lyft, Inc. (Fed. Cir. Sept. 29, 2025):
The Board used the USPTO’s 2019 Revised Patent Subject Matter Eligibility Guidance and the October 2019 Update, which uses a three-pronged framework. The Guidance is available at… We decline to adopt this framework, which is not binding on this Court, and instead evaluate the Board’s decision under our precedent, which follows the two-step test set out in Alice. See Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1334 (Fed. Cir. 2016).
(Emphasis added)
Nevertheless, there are some important takeaways for patent practitioners. First, this case is a reminder to describe technical improvements fully in the specification: spell out the architecture or computer functionality being improved (e.g., data structures, memory management), and tie those improvements to concrete technical outcomes (e.g., speed, scalability, reduced complexity).
Second, when drafting claims, focus on how the invention achieves its results rather than merely stating the desired outcome. Ex parte Desjardins shows that including concrete data-handling steps, model-update processes, or system operations may demonstrate a practical application, whereas purely results-oriented language (e.g., “optimizing model performance”) invites a rejection under 35 U.S.C. § 101.
