AI expert Michael Fischer explores how the standardization process in the field of AI is further developing and the importance of having a strong patent strategy.

When the Canadian telecommunications giant Nortel went bankrupt in 2009, not everything was lost. Among the assets that remained was a huge patent portfolio, which was auctioned in 2011. Although these assets were purely intangible, the portfolio fetched a price of $4.5 billion for around 6,000 patents and patent applications. Nortel described the package as: “… more than 6,000 patents and patent applications [covering] wireless, wireless 4G, data networking, optical, voice, Internet, service provider, semiconductors and other patents”[1]. For those involved in building up these assets, it was encouraging to see that their work created value for the company and for the economy as a whole. The reason patents in the field of telecommunications reach such high prices is that many of them cover telecommunication standards and are therefore considered “standard-essential”: any company that wants to, or is forced to, comply with the standards necessarily infringes the patent and therefore has to pay royalties, which in turn leads to further legal discussion, such as the much-cited FRAND negotiations.

So far, this interplay between standards and patents has been unique to telecommunications, but the situation may be about to change. As AI technology becomes more and more mature, the desire to subject it to standards increases. AI technology is sometimes perceived as a “black box”, since it is not always understandable why it produces a certain output for a certain input, and it is therefore still met with a degree of scepticism in safety-relevant fields of technology. “Explainable AI”, also referred to as XAI, is a research field within AI that now receives a great deal of attention; it aims to provide deeper insight into, and understanding of, the internal functioning of AI systems in order to increase their safety and reliability. It should be noted that the algorithms themselves are not always the decisive element: it is often the datasets with which the algorithms are fed that have to be analysed more closely. For example, in the future there may be standard training data sets with which the algorithms for a medical diagnosis app have to be trained in order to comply with the applicable standards. The introduction of standards in the field of AI is not primarily motivated by the patent system; rather, it is driven by safety and reliability as well as ethical concerns. As a side effect, however, it may lead to standard-essential patents that give patent proprietors huge advantages in the market.

Further algorithmic development can of course contribute to the advancement of AI in different fields of technology. For companies involved in the development of new AI technologies, obtaining a patent for these technologies can be a valuable asset, particularly if a standardization body later decides to make that technology part of a standard. Since a patent will only be granted to the first person or company to file a patent application, it is important to act quickly and to keep in mind the potential that lies in the interplay of AI, standards and patents. There is a common misconception that “software is not patentable”; this is misleading and only applies to algorithms or mathematical methods “as such”. An algorithm which is applied in a technological field and brings about a technical effect is normally considered patentable.

To underline the importance of ‘Explainable AI’ in the field of standards, the National Institute of Standards and Technology (‘NIST’) has announced on its website that it is inviting stakeholders to submit comments on the first draft of the ‘Four Principles of Explainable Artificial Intelligence (NISTIR 8312)’, a white paper[2] that aims to define the principles capturing the fundamental properties of explainable AI systems. (Those familiar with standardization processes will not be surprised to see that page four of the paper includes “a call for information on essential patent claims”.) NIST will be accepting comments until October 15, 2020.[3]

In October 2019, the German Institute for Standardization (DIN) organised a kick-off meeting to present and discuss its plans for an AI standardization roadmap. The event was attended by more than 300 experts and stakeholders from industry, politics, the scientific research community and society as a whole; patent attorneys from Venner Shipley LLP also had the chance to attend. The AI standardization roadmap aims to quickly create a framework for action for standardization in the field of AI that will strengthen the global competitiveness of German industry.[4] While it is not yet clear at which level an AI standard will be defined (high level – standards on the AI technology development process and quality criteria – or low level – standards on the algorithm itself), initial documents have already been published.[5]

While national efforts to establish AI standards are to be welcomed, it is clear that standardization nowadays happens at an international level. Even more important, due to its international impact, and the source of considerable media attention, is the creation in 2017 of working committee ISO/IEC JTC 1/SC 42 of the International Organization for Standardization (ISO), the first international standards committee to look at the entire AI ecosystem. As stated on its website, JTC 1/SC 42 will “serve as the focus and proponent for the JTC 1 AI standardization programme and provide guidance to JTC 1, IEC, and ISO committees developing AI applications”.[6] For stakeholders in the field of AI it is worthwhile to observe what this working committee is doing, or even to get involved in its activities. This may help them to foresee developments in the standardization of AI (and to adapt their patent strategy accordingly), or even to influence the standardization process.

Take home message

While big IT companies are familiar with patenting their innovations, it is more often smaller start-ups that lack the financial means or do not prioritise developing an IP strategy. (Even large pharmaceutical or chemical companies with a great deal of experience in patenting their pharmaceutical or chemical inventions may sometimes lack awareness when it comes to protecting their AI innovations.) This can be a drawback, since it is often the patents that distinguish one start-up from another. Think of an investor deciding whether to invest in an AI start-up that has applied for patents or in one that has not. It is also advisable to closely observe how the standardization process in the field of AI develops. It will be interesting to see whether standards in the field of AI will develop in the same way, and reach the same importance, as they have in the field of telecommunications.


[1] https://www.fulcrum.com/nortel_bankruptcy_patent_auction/

[2] https://www.nist.gov/system/files/documents/2020/08/17/NIST%20Explainable%20AI%20Draft%20NISTIR8312%20%281%29.pdf

[3] https://www.nist.gov/topics/artificial-intelligence/ai-foundational-research-explainability

[4] https://www.din.de/en/din-and-our-partners/press/press-releases/artificial-intelligence-requires-standards-and-specifications–688888

[5] https://www.din.de/en/wdc-beuth:din21:326795941/toc-3183949/download

[6] https://www.iso.org/committee/6794475.html