Machine Learning and the Profession of Medicine

Must a physician be human? A new computer, “Ellie,” developed at the Institute for Creative Technologies, asks questions as a clinician might, such as “How easy is it for you to get a good night’s sleep?” Ellie then analyzes the patient’s verbal responses, facial expressions, and vocal intonations, possibly detecting signs of posttraumatic stress disorder, depression, or other medical conditions. In a randomized study, 239 probands were told that Ellie was “controlled by a human” or “a computer program.” Those believing the latter revealed more personal material to Ellie, based on blind ratings and self-reports.1 In China, millions of people turn to Microsoft’s chatbot, “Xiaoice,”2 when they need a “sympathetic ear,” despite knowing that Xiaoice is not human. Xiaoice develops a specially attuned personality and sense of humor by methodically mining the Internet for real text conversations. Xiaoice also learns about users from their reactions over time and becomes sensitive to their emotions, modifying responses accordingly, all without human instruction.

Ellie and Xiaoice are the result of machine learning technology. The profession of medicine has a tremendous opportunity and an obligation to oversee the application of this technology to patient care. To do so, medicine, which is only now becoming comfortable with the electronic medical record, needs to continue with the digital revolution. Business has already embraced this degree of innovation, leveraging the momentum in the technology sector that is driven by the confluence of 3 key factors: first, massive amounts of available data; second, advances in computational processing, including developments in distributed computing; and third, breakthroughs in machine learning, in which computational algorithms learn from raw data, without human instruction. Machine learning algorithms personalize search engines, keep spam out of email inboxes, and steer self-driving cars. Computational capacity and machine learning allow massive amounts of data to be analyzed rapidly, laying the groundwork for personalized medicine mediated by technology.

Previously unimaginable opportunities to apply machine learning to the care of individual patients in medical practice are currently within grasp. Machine learning algorithms can accommodate different configurations of raw data, assign context weighting, and calculate the predictive power of every combination of variables available to assess diagnostic and prognostic elements. The applications of machine learning in medicine have been groundbreaking, especially in imaging. Image recognition algorithms have discovered new tumor features relevant to prognosis, contributing to new knowledge in pathology.3 These algorithms also can handle risk profiles that are highly individualized, allowing analysis of disorders with multiple etiologies and of incomplete data, as is typical in real clinical settings. Using decision trees and nested analytic structures, clinicians can then extract the minimum data necessary to make a diagnosis or therapeutic recommendations. For example, a feature selection algorithm reduced the number of items (from 29 to 8) necessary for diagnosing autism spectrum disorder (ASD) with 100% accuracy among 612 patients with ASD. The classifier was then tested with samples of 110 and 336 patients and identified all but 2 individuals with ASD, representing 99.7% sensitivity and 94% specificity.4 Potentially, computer-assisted assessment in ASD will reduce the time needed to make an accurate diagnosis and improve patient outcomes, as has been achieved in the care of patients with cancer.5
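As a concrete illustration of this kind of item-level feature selection, the following is a minimal sketch in Python with scikit-learn that prunes a hypothetical 29-item instrument to 8 items against a diagnostic label. The synthetic data and the recursive feature elimination approach are assumptions for illustration only, not the data or algorithm reported in the cited study.

```python
# Minimal sketch of item-level feature selection for a screening instrument.
# Rows are patients, columns are 29 instrument items, plus a binary label.
# Synthetic data; illustrative technique, not the cited ASD study's method.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(612, 29))   # 29 items scored 0-3 (synthetic)
y = rng.integers(0, 2, size=612)         # 1 = diagnosis present (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Recursively eliminate items until only 8 remain, keeping the items the
# tree finds most informative for the diagnostic label.
selector = RFE(DecisionTreeClassifier(random_state=0), n_features_to_select=8)
selector.fit(X_train, y_train)
kept_items = np.flatnonzero(selector.support_)

# Retrain on the reduced item set and report sensitivity on held-out patients.
clf = DecisionTreeClassifier(random_state=0).fit(X_train[:, kept_items], y_train)
sensitivity = recall_score(y_test, clf.predict(X_test[:, kept_items]))
print(f"items kept: {kept_items}, sensitivity: {sensitivity:.2f}")
```

With real item-level data, the held-out sensitivity and specificity of the reduced item set, not the training accuracy, are what would justify shortening the instrument.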
In the past, much of medicine relied on population-wide diagnostic models that performed well, on average. Presently, the goal is the generation of precise diagnoses and treatment strategies using highly individualized data in the care of a specific patient.

Global adoption of mobile and wearable technology has added yet another dimension to machine learning, allowing the uploading of large amounts of personal data into learning algorithms. Now, within closed-loop feedback systems, mobile technology (eg, a smartphone) is not just a biometric device (eg, measuring blood glucose levels) but ultimately could become a platform from which to deliver tailored interventions based on algorithms that continually optimize for personal information in real time. Available for many years, implantable cardioverter-defibrillators have saved lives by using algorithms to detect ventricular fibrillation and immediately deliver a defibrillating shock to the heart. Now, wearable devices promise to improve diabetes care: a small glucose meter adherent to the upper arm can regularly sample glucose levels, which are then wirelessly fed to the patient’s smartphone to inform the patient and treating physician. A randomized controlled trial enrolling 247 patients demonstrated that a glucose sensor in a closed loop with an insulin pump reduced nocturnal hypoglycemia (area under the curve reduced by 37.5% in the sensor group compared with the control group, P < .001) by “suspending” insulin pumping below a glucose threshold.6
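A minimal sketch of the threshold-based suspension logic such a closed loop relies on follows, written in Python. The 70 mg/dL threshold, 5-minute sampling interval, and 2-hour maximum suspension window are assumptions chosen for illustration, not the parameters of the cited trial or of any particular device.

```python
# Minimal sketch of threshold-based insulin-pump suspension ("low glucose
# suspend"). Thresholds, timing, and interfaces are illustrative assumptions,
# not the logic of any specific commercial device or of the cited trial.
from dataclasses import dataclass

SUSPEND_THRESHOLD_MG_DL = 70   # assumed hypoglycemia threshold
MAX_SUSPEND_MINUTES = 120      # assumed maximum suspension window

@dataclass
class PumpState:
    suspended: bool = False
    minutes_suspended: int = 0

def update_pump(glucose_mg_dl: float, state: PumpState, minutes_elapsed: int = 5) -> PumpState:
    """Decide whether basal insulin delivery should be suspended or resumed."""
    if state.suspended:
        state.minutes_suspended += minutes_elapsed
        # Resume once glucose recovers or the safety window expires.
        if glucose_mg_dl > SUSPEND_THRESHOLD_MG_DL or state.minutes_suspended >= MAX_SUSPEND_MINUTES:
            return PumpState(suspended=False)
        return state
    if glucose_mg_dl <= SUSPEND_THRESHOLD_MG_DL:
        return PumpState(suspended=True)
    return state

# Example: a falling overnight glucose trace sampled every 5 minutes.
state = PumpState()
for reading in [95, 82, 71, 66, 64, 78, 92]:
    state = update_pump(reading, state)
    print(reading, "suspended" if state.suspended else "delivering")
```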
Mobile technology also can track everyday language, and natural language processing and sentiment analysis can then extract patterns and meaning. For instance, a cell phone passively listening to a person’s voice might recognize speech patterns associated with “thought disorder,” thus identifying the beginning of cognitive impairment or a psychotic episode. A machine learning algorithm detected abnormal speech features in free-text samples, collected quarterly for 2.5 years, from 34 adolescents at high risk for psychosis. With 100% accuracy, the algorithm identified the 5 individuals who later developed psychosis, surpassing clinical ratings.7 In depression, negative cognitions (eg, “things are hopeless”) might be identified. A smartphone might forewarn of a major depressive episode by spotting an increased frequency of these distortions in a person’s language, correlated with decreased texting to friends and more time spent at home based on geolocation.

Natural language processing may benefit not only the care of individuals but also populations. An algorithm using data mined from more than 5 million Twitter posts predicted a flu outbreak with an accuracy of 0.89 and outperformed state-of-the-art methods such as analysis of flu-related web searches, especially at early stages, because it successfully filtered out messages posted by individuals who did not actually have the flu.8 The capacity to collect vast personal data and to integrate those data into individualized predictive models has the potential to improve precise and early disease detection and invite the salutary health outcomes that accompany appropriate intervention.
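To show what filtering out posts from people who do not actually have the flu might look like, here is a minimal Python sketch using a bag-of-words text classifier. The tiny labeled examples and the TF-IDF plus logistic regression pipeline are illustrative assumptions, not the corpus or model of the cited study.

```python
# Minimal sketch of filtering social-media posts so that only messages that
# suggest a current infection are counted toward an outbreak signal.
# Toy labeled examples; illustrative technique, not the cited study's model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "home sick with the flu, fever all night",        # current infection
    "got the flu, can't stop coughing",               # current infection
    "flu shots available at the pharmacy this week",  # not infected
    "hope I don't catch the flu going around",        # not infected
    "my kid has the flu so I'm stuck home too",       # infection in household
    "reading about flu season predictions",           # not infected
]
labels = [1, 1, 0, 0, 1, 0]  # 1 = post suggests an actual case

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(train_posts, labels)

new_posts = ["in bed with the flu again", "flu vaccine clinic opens monday"]
counted = [p for p, keep in zip(new_posts, classifier.predict(new_posts)) if keep == 1]
print(counted)  # only posts classified as actual cases feed the outbreak count
```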
Yet innovation of this nature comes with risk. For example, Samaritans Radar, a text-based app released by a British suicide prevention organization, processed posts on Twitter, used machine learning to detect phrases predictive of suicidal behavior, and sent messages to followers of the posting individual so they might obtain help. The app was removed from the market just 1 week after its release amid criticism that it had the potential to place fragile individuals at greater risk from cyberbullies.9 The risks connected with granular tracking and recording of personal data gathered in the context of everyday life may turn out to involve less protection of privacy and greater potential for exploitation of human vulnerability. The potential for misuse of human health information is disquieting. Consider that user agreements for some apps have had clauses that allow a person’s smartphone to passively record conversations without permission. As machine learning enters state-of-the-art clinical practice, medicine thus has the immense obligation to ensure that this technology is harnessed for societal and individual good, fulfilling the ethical basis of the profession.

The world has entered a period of unprecedented innovation, bringing a wealth of possibilities to clinical medicine. With this extraordinary opportunity, however, comes the obligation of the medical profession to serve the human good. Ethical design thinking is essential at every stage of development and application of machine learning in advancing health. Toward this aim, physicians with integrity and sophistication should partner closely with computer and data scientists to reimagine clinical medicine and to anticipate its ethical implications. It is important to systematically validate data from mobile health and consumer-facing technologies, particularly for cases in which dynamic intervention is provided. Physicians should initiate collaboration and be judicious in their approach to industry relationships so that technology may progress in a manner that upholds the social trust in medicine. In tandem with innovation, academic physicians must create opportunities for rigorous empirical validation of new technologies against clinical outcomes. Certainly, leaders throughout medicine must endeavor to create novel ethical approaches sufficient to address emerging dilemmas inherent to the use of machine learning technologies in furthering health.

Article Information

Corresponding Author: Alison M. Darcy, PhD, Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, 401 Quarry Rd, Stanford, CA 94305-5719 (adarcy@stanford.edu).

Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Roberts reported that she is owner of Terra Nova Learning Systems. No other authors reported disclosures.

References

1. Lucas GM, Gratch J, King A, Morency L-P. It’s only a computer: virtual humans increase willingness to disclose. Comput Human Behav. 2014;37:94-100.
2. Markoff J, Mozur P. For sympathetic ear, more Chinese turn to smartphone program. New York Times. July 31, 2015. http://www.nytimes.com/2015/08/04/science/for-sympathetic-ear-more-chinese-turn-to-smartphone-program.html. Accessed September 7, 2015.
3. Ye Q-H, Qin L-X, Forgues M, et al. Predicting hepatitis B virus-positive metastatic hepatocellular carcinomas using gene expression profiling and supervised machine learning. Nat Med. 2003;9(4):416-423.
4. Wall DP, Kosmicki J, Deluca TF, Harstad E, Fusaro VA. Use of machine learning to shorten observation-based screening and diagnosis of autism. Transl Psychiatry. 2012;2(4):e100.
5. Lisboa PJ, Taktak AFG. The use of artificial neural networks in decision support in cancer: a systematic review. Neural Netw. 2006;19(4):408-415.
6. Bergenstal RM, Klonoff DC, Garg SK, et al; ASPIRE In-Home Study Group. Threshold-based insulin-pump interruption for reduction of hypoglycemia. N Engl J Med. 2013;369(3):224-232.
7. Bedi G, Carrillo F, Cecchi GA, et al. Automated analysis of free speech predicts psychosis onset in high-risk youths. NPJ Schizophr. 2015. doi:10.1038/npjschz.2015.30.
8. Aramaki E, Maskawa S, Morita M. Twitter catches the flu: detecting influenza epidemics using Twitter. In: Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing; July 27-31, 2011; Edinburgh, United Kingdom. http://www.aclweb.org/anthology/D11-1145. Accessed January 6, 2016.
9. Orme J. Samaritans pulls “suicide watch” Radar app over privacy concerns. The Guardian. November 7, 2014. http://www.theguardian.com/society/2014/nov/07/samaritans-radar-app-suicide-watch-privacy-twitter-users. Accessed September 7, 2015.


JAMA, Volume 315(6), February 9, 2016

Publisher: American Medical Association
Copyright: © 2016 American Medical Association. All Rights Reserved.
ISSN: 0098-7484
eISSN: 1538-3598
DOI: 10.1001/jama.2015.18421


Keywords: artificial intelligence, medical informatics, medical informatics applications, technology
