Baker Tilly's Insight on How AI Is Revolutionizing the Healthcare and Life Sciences Industry

Accesswire ·  03/25 21:10

NORTHAMPTON, MA / ACCESSWIRE / March 25, 2024 / Baker Tilly:

Authored by Arun Parekkat

The use of artificial intelligence (AI) in life sciences, including applications such as machine learning (ML), in which software is trained to form its own decision-making criteria from previous examples of a particular task, has the potential to transform how we improve human health and conduct medical research. According to Stanford University's Artificial Intelligence Report 2023, medical and healthcare was the AI focus area that attracted the most investment in 2022, at $6.1 billion.

To better understand the potential for how AI can revolutionize the life sciences industry, let's first explore the concept of AI.

What is AI? A definitional treatment

AI is a term most of us are now familiar with, but its interpretation varies widely. At a high level, "artificial intelligence" encompasses the use of technology to perform tasks typically associated only with human beings, such as learning and decision making. The spectrum runs from "strong AI," or artificial general intelligence (AGI), in which a machine would have intelligence equivalent to a human's, to "weak AI," the version we know best from voice assistants and driverless cars, in which software is trained to perform focused and specific tasks.

According to the AI Act, the EU's 2021 regulatory framework for AI, AI is defined as "software that is developed with techniques and approaches that can generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with." Annex I of the act outlines approaches such as ML, explicit logic-based approaches, and more general statistical techniques.

The current U.S. Food and Drug Administration (FDA) definition of AI describes it as the "science and engineering of making intelligent machines, especially intelligent computer programs." AI can draw on different techniques, including models based on statistical analysis of data, expert systems that rely primarily on if-then statements, and ML. The FDA also states that ML is a subset of AI that can be used to design and train software algorithms to learn from and act on data. Software developers can use ML to create an algorithm that is "locked," so that its function does not change, or "adaptive," so that its behavior can change over time based on new data.

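As a minimal illustration of the locked-versus-adaptive distinction, consider a toy threshold classifier. The clinical scenario, class name, and all numbers below are invented for the sketch and are not drawn from any FDA example.

```python
# Toy sketch of the FDA's "locked" vs. "adaptive" distinction. The
# clinical scenario and all numbers are invented for illustration.

class ThresholdClassifier:
    """Flags a lab value as abnormal when it exceeds a learned threshold."""

    def __init__(self, adaptive: bool):
        self.adaptive = adaptive
        self.normals = []
        self.threshold = None

    def fit(self, normal_values):
        # Pre-market training: threshold = largest confirmed-normal value.
        self.normals = list(normal_values)
        self.threshold = max(self.normals)

    def predict(self, value) -> bool:
        return value > self.threshold

    def incorporate(self, confirmed_normal):
        # Post-market feedback: a clinician confirmed this reading was normal.
        if self.adaptive:
            # Adaptive: behavior changes over time based on new data.
            self.normals.append(confirmed_normal)
            self.threshold = max(self.normals)
        # Locked: new data is discarded, so the function never changes.

locked = ThresholdClassifier(adaptive=False)
adaptive = ThresholdClassifier(adaptive=True)
for model in (locked, adaptive):
    model.fit([4.0, 4.5, 5.0])
    model.incorporate(5.6)  # same post-launch data shown to both

locked.predict(5.4)    # True: still flagged, threshold stays at 5.0
adaptive.predict(5.4)  # False: threshold has drifted up to 5.6
```

The same code path produces diverging behavior over time, which is exactly why regulators treat adaptive algorithms differently: the device on the market is no longer the device that was validated.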
What does AI mean in medical technology (medtech) and in pharmaceutical terms?

Given the significant expenditure associated with developing and delivering drugs for burgeoning global populations, it is unsurprising that AI is sought as a tool to increase productivity and efficiency in healthcare. We must go back to 1995 to find the first ML technology approved by the FDA. Since then, more than 500 AI-led software medical devices have gained 510(k) clearance, aiding image analysis and the diagnosis of diseases such as cancer, and helping to optimize the delivery of surgery and post-operative care for orthopedic patients receiving an implant such as an artificial hip.

Within pharmaceutical research and development, AI is being used at multiple points across the discovery and development pipeline. The first drugs developed "in silico," or by computer, are now entering human clinical trials. In its broadest sense, AI is being used to improve the identification of candidate molecules and to aid the recruitment and retention of patients for Phase I to III clinical trials. For marketed drugs, AI technologies such as large language model (LLM) chatbots are being used as symptom assessment tools to improve awareness of rare diseases among the public and primary care providers.

Managing the risk of AI - an assessment

While the promise of AI in life sciences is undeniable, potential issues and ethical considerations warrant close attention. Data privacy and security are a concern because AI relies on vast amounts of sensitive patient data, so ensuring the confidentiality and protection of this information is paramount. The potential for data breaches or unauthorized access poses significant risks to patient privacy and could erode public trust in AI-driven healthcare solutions.

Another ethical challenge pertains to the "black box" nature of some AI algorithms, especially given how important it is in the life sciences industry to be able to consistently explain the clinical benefit and value of a product with clear, understandable evidence. Complex ML models may arrive at conclusions without providing transparent explanations for their decisions. In the medical world, where accountability and transparency are crucial, potential biases and a lack of interpretability pose ethical problems: clinicians and regulators need to understand how AI arrives at its conclusions to ensure patient safety and maintain high ethical standards.

Several solutions have been proposed to address the data privacy issues that AI raises in healthcare and life sciences. These include de-identification of data, controlling data access based on patient consent, and tracking usage purposes over time. Strategies such as encryption, differential privacy (sharing group attributes without revealing individual ones), federated learning (avoiding centralized data aggregation), and data minimization (limiting personal data to an application's scope) can also help address these concerns.

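Of the strategies above, differential privacy is perhaps the easiest to sketch in a few lines. The following is a minimal, illustrative sketch of the Laplace mechanism for releasing a noisy patient count; the records, the `private_count` function name, and the epsilon value are invented for the example.

```python
import random

def private_count(records, predicate, epsilon=1.0):
    """Release a count with Laplace(0, 1/epsilon) noise. Sensitivity is 1:
    adding or removing one patient changes the true count by at most 1."""
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon);
    # smaller epsilon means more noise and stronger privacy.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

patients = [
    {"age": 34, "condition": "rare_x"},
    {"age": 51, "condition": "common"},
    {"age": 47, "condition": "rare_x"},
]
random.seed(0)  # fixed seed so the sketch is reproducible
released = private_count(patients, lambda p: p["condition"] == "rare_x")
# 'released' is close to the true count (2), but no single patient's
# presence or absence can be confidently inferred from it.
```

The group-level statistic remains useful for research while the noise masks any one individual's contribution, which is the trade-off the "sharing group attributes without revealing individual ones" description captures.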
A key development has been the European Union's AI Act, a first-of-its-kind regulatory framework in which AI systems are analyzed and classified according to the risk they pose to users. It creates a risk pyramid: an outright ban for certain AI applications, stringent requirements for AI systems classified as high risk, and a more limited set of (transparency) requirements for lower-risk AI applications. The stated goal of the EU's AI Act is "a balanced and proportionate approach limited to the minimum necessary requirements to address the risks linked to AI without unduly constraining technological development."

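The tiered structure of the risk pyramid can be pictured as a simple lookup from risk class to regulatory consequence. This is a toy sketch only: the tier labels mirror the description above, but the wording of the obligations and the example application are illustrative assumptions, not classifications taken from the act itself.

```python
# Toy sketch of the AI Act's risk pyramid as a tier -> obligation lookup.
# Labels follow the description in the text; details are illustrative.
OBLIGATIONS = {
    "unacceptable": "outright ban",
    "high": "stringent requirements",
    "lower": "limited transparency requirements",
}

def obligations_for(tier: str) -> str:
    """Look up the regulatory consequence for a given risk tier."""
    return OBLIGATIONS[tier]

# A hypothetical AI diagnostic decision-support tool would likely fall
# into the high-risk tier:
obligations_for("high")  # -> "stringent requirements"
```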
Risk assessments in AI-enabled healthcare products

While clinicians and regulators have considerable interest in understanding the degree to which AI is utilized within a product, particularly where it concerns the long-term efficacy and accuracy of "adaptive" technologies, fundamental product development and engineering principles apply equally to devices that use "true" AI and those that do not.

Assessing whether a product uses AI as defined by the various regulations in place and under consideration around the globe, and then determining the measures needed to robustly assess its readiness to meet market approval requirements, will need to cover the underlying software prediction models, the data used to train and validate them, and the way the product is delivered to relevant end users (whether healthcare professionals or individual patients) within a clearly described patient journey. Recent FDA guidance has provided greater clarity on how manufacturers can safely build adaptive products that have the capacity to learn and improve as they are exposed to increasing volumes of data once placed on the market.

Developers can take comfort in the fact that clear intended-use definitions, robust evidence-based methodologies, and total product lifecycle approaches (including Agile) applied during development and post-launch will remain the cornerstones of the regulatory standard.

The task of identifying and assessing risks that could arise specifically from AI technologies is not trivial, and a follow-up paper will explore this domain in more detail, with examples that can help manufacturers ensure safety and efficacy as well as benefits unique to this technology, such as personalization and autonomy, and mitigations for bias, including healthcare inequalities.

For more insights, visit Baker Tilly's healthcare & life sciences page.

View additional multimedia and more ESG storytelling from Baker Tilly on 3blmedia.com.

Contact Info:

Spokesperson: Baker Tilly

SOURCE: Baker Tilly
