Computers have changed everything. We all know it, and most of those changes have been for the better. Many of the fears we had at the advent of computers have not come to pass (…yet). In the preface to one edition of Dante Alighieri’s Divine Comedy, the comment is made that in Dante’s age a single person could know everything that was known in the world. Whether this was empirically true of Dante’s age is hard to say, but the point is well taken: collecting information about the world was the rate-limiting step, and the ability to do so was a very useful attribute for an individual to have.
Fast forward to the modern age, and there is far too much information for one person to know. Even within a single field, say biology or medicine, there is far too much for one person to take in. By one estimate, 1.5 million academic papers were published last year alone. Merely keeping up with new knowledge is an insurmountable task, never mind the enormous body of information that already exists. The job of a teacher or professor can no longer principally be to teach facts; this is not a tradition of passing an oral history from one generation to the next. With so many facts available on the internet, the professor who merely passes on facts will soon be obsolete. The job of the professor, and indeed the job for all of us, is to learn how to synthesize information: to take in large amounts of data and reach a conclusion from which actionable steps can be taken. Computers are currently stunningly bad at this. Creativity, by which I mean developing an objective that is not predetermined, operating in a system where the rules are fungible, and recognizing opportunities across disciplines and across tasks, is something that increased computational power has not given computers. These are, for the near future at least, tasks for which humans are uniquely suited.
Although there is much discussion (ahem…debate) about where artificial intelligence will take us, there is consensus that it will be used, and is being used, to augment human abilities. Our role, both now and in the future, is not to accumulate information but to synthesize it, to solve problems creatively, and to come up with new goals. It is, and will continue to be, a new way of doing things, but one whose benefits are enormous. You may remember from a previous post that there are over 7,000 known diseases, and many scientists estimate that the true number could be double that. No human can keep all of that straight; for a physician, that is a recipe for failure, with many missed diagnoses and missed opportunities for patients. Artificial intelligence augmenting physician knowledge and skills is the solution to this problem. The computational power and memory of artificial intelligence can keep the diseases, indications, complications, medications, and everything else straight. So why do we need the physician at all? Precisely because of the weaknesses of computer-based intelligence mentioned earlier. In medicine there are no set rules and no clearly defined goal. Each patient is different, with different baselines and unique attributes. Further, there is no guarantee that there is only one solution or diagnosis to the puzzle; there may be, and often are, many different disease conditions and diagnoses all occurring at once. Obtaining information from the patient, tailoring diagnosis and treatment to the individual, and ensuring that the patient as a whole is cared for will be the domain of the physician. Recalling large amounts of information, integrating the newest clinical trials, and identifying rare diseases will be the critical contribution of the machine.
But whether in medicine or any other field, the times have changed and so too has our role. Instead of collecting information, we must synthesize it and use our creativity to solve practical problems.