While AI has the potential to revolutionize healthcare, there are also potential negative consequences to its use. One major concern is that AI may perpetuate or even amplify existing biases and discrimination in healthcare, because AI is only as unbiased as the data it is trained on and the way it is programmed. Since the humans who create these programs have their own implicit biases, the AI will replicate them and may even amplify them with its efficiency. Another concern is that healthcare professionals and patients often are not educated enough to understand the complex algorithms used by the AI and how it arrives at its decisions. This lack of transparency can decrease trust in the technology. Furthermore, large amounts of data are required to develop a reliable AI system, much of it sensitive medical information. This raises concerns about privacy and security, particularly if the data is breached or accessed without authorization. While AI has enormous potential to improve healthcare, it is important to be aware of the potential negative consequences of its use.
What happened to asking the patients if they want this? They should decide, not the doctors, not the hospitals, not the insurance companies or the government! Patients make the decision!
I do believe that the actual stretch to which AI can help the healthcare system may be taken too far. I can understand the importance of using AI to consolidate data, having large amounts of information ready to go when referring to treatment options and the like. I can see how this saves time and resources and at times saves us from error, and I understand the importance of being as efficient as possible in many situations in medicine. But how well does AI understand the risks and benefits for each patient? How well does AI truly follow beneficence for each individual patient? AI can't necessarily understand the emotional or mental toll certain treatments can have on a patient beyond the typically stated adverse reactions.

A major problem arises when the patient does not follow the standard of care, when the patient does not respond the way many others have to treatments, procedures, medicine, etc. Dr. Saidy states that AI can even learn from patients who did not follow the treatment and can help come up with the following steps. But this is all still an algorithm backed by some data. Do we know if that data is recent? Do we know if it follows a trend and is generalizable to other places? Do we know who collected this data? These are all questions we as healthcare professionals need to think about.

Think about a medication change: we could easily train a machine to know drug-drug interactions and which medications can be mixed with one another, but what happens when a patient has an allergic reaction to a new medication and needs to replace it with something else? Further yet, what if the replacement medication would require two more medication changes if it did not work with all their existing meds? Here is where we may end up spending more money or time than we thought we saved with AI, and we could have resolved the allergy and reactions faster if a human doctor had been around to supervise, or had thought to order LFTs or genetic screening for patients with different metabolizing abilities. When we have to pick up the pieces AI left behind for lack of critical thinking, we are taking two steps back. We have to preserve beneficence, and all the thought processes and considerations that surround doing what is best for the patient.

I will say, however, that there are great ways to use AI, and there should be more information on specific uses, such as locating the primary site of a cancer. I think there is a balance between allowing AI to take over an entire patient and allowing AI to aid us with information we cannot see or feel with human sight or touch. But when we consider places such as the ER, where decisions need to be made quickly, is there a potential for doctors to rely on this information too much, since they need to think fast on their feet? Lastly, Dr. Saidy is aware of data bias and how it could skew the output depending on a patient's information. If we want to do what is best for the patient, tools to prevent bias are extremely important, and manufacturers should consider perfecting those tools before using AI on patients and potentially having the AI misdiagnose. In the case of misdiagnosis in particular, AI could break the code of non-maleficence: if a patient is misdiagnosed, chances are their treatment is incorrect for their diagnosis, in which case we could be causing harm to the patient without knowing it.
This is where, again, AI needs to be used as a backup tool, not the lead tool.
Navid talks about the stability of artificial intelligence and its potential to improve care for patients. While I agree that AI can be a game changer and could improve diagnosis and care in a lot of ways (AI is consistent; it won't miss things a human will, because it doesn't have a bad day, isn't affected by a heavy patient load, and isn't worried about 40 patients at the same time), I don't believe AI will drastically improve healthcare. I believe it could actually be damaging.

When discussing AI, there is always one thing left out: the human touch. Doctors care about their patients; they dedicate their lives to learning exactly how to help them, and if they don't know how, they've learned how to learn so that they can. While AI does learn and grow, it has no personal connection, no desire for anyone's well-being, and no emotional connection with anyone. This is what drives physicians; nobody goes into medicine for the money or for the job itself. Yes, the money can be good, but $400k of debt to pay off to become a doctor eats up so much of it. Most doctors don't have a typical nine-to-five job; they don't go home until all the patients have been seen, the charting has been done, and the staff has gone home. If an emergency comes up, they don't get to go home until it's taken care of. So why do doctors go into medicine? To help people. Every doctor is there because they genuinely care about the person they are seeing, and this isn't something that AI can ever do.

Care and passion go a long way as well: when you are passionate about something, there's nothing you won't do to achieve what you are after, and you won't stop working until you've accomplished what you set out to do. If a doctor can't figure out what's going on, he's going to dedicate all the time he has to figuring it out. That's why AI can never replace a healthcare worker. AI doesn't know that a patient has a wife and kids, or grandkids they care for, or foster kids they have taken in, but a doctor does. Doctors live by the principle of beneficence, to do good, and that's something AI doesn't understand. Now, a doctor could utilize AI to help them come to a conclusion or find the answer to a question; there are ways to take advantage of technology while still taking advantage of human care.
I enjoyed reading your comment, but what if AI were trained to behave like a human, with emotions and consciousness? AI in medicine has hope; it will drastically improve healthcare. A lot of work has to be done for AI in medicine to be perfect. It might not be there now, but it's the future.
Long overdue. Use AI to predict medication dosage and choice of meds (ADHD, for example) from past response. With WhatsApp, the patient simply exports the chat log as a txt file and an LLM does the analysis; just use GPT-4 and the like (you don't even have to fine-tune, though it might help), as long as it learns and the data can be reused or is beneficial for future models.
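For illustration, here is a minimal sketch of the pipeline this comment gestures at, assuming the OpenAI Python client; the file name, model name, and prompt are hypothetical placeholders, not a validated medical workflow:

```python
# Minimal sketch: analyze an exported chat log with an LLM.
# Assumes `pip install openai`, an OPENAI_API_KEY in the environment,
# and an exported WhatsApp log saved as chat.txt (all placeholders).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("chat.txt", encoding="utf-8") as f:
    log = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Summarize how the patient reported responding to "
                    "each medication and dose mentioned in this log."},
        {"role": "user", "content": log},
    ],
)
print(response.choices[0].message.content)
```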
Just like anything in healthcare that has ever been developed and adopted, AI needs to be tested in the field with live subjects. There may be casualties and collateral damage in this approach, but that has been the playbook of medical innovation with every medication and device. Informed consent is the key, and the adoption of this technology needs to be driven by clinicians with routine feedback, not regulators. The full adoption of AI in healthcare will therefore take decades and will be fraught with setbacks and perverse incentives. The road toward a regulated AI healthcare model is going to be very long. People don't care if AI writes a poem for them, but for diagnosis and management there are going to be trust issues, and human acceptance may require generational turnover. However, in a perfect world where nothing goes wrong, AI would be amazing for healthcare as far as accuracy and efficiency.
I agree the road will be long and expensive for AI developed by biased and selfish individuals. It won't be for developers who commit to eliminating bias in every aspect they can, which leads to a much more useful and functional AI system. If you devote the time to building a solid set of systems, the answer, path, and solution come to fruition much faster.
What about the rare instances where patients were told they were going to die soon but didn't, because of their willpower and so on? Does AI account for a patient's will and hope to live? Their zeal for life?
AI in medicine has the potential to bring about significant benefits in terms of improved patient outcomes, more efficient diagnoses, and reduced healthcare costs. However, there is also a risk of harm if AI is not used ethically and with caution. One significant ethical concern is the potential for maleficence, or harm caused by the misuse or unintended consequences of AI. For example, if an AI system is not properly trained or validated, it could make incorrect or biased decisions that harm patients. Additionally, if AI is relied upon too heavily, it could lead to dehumanization of healthcare, with patients reduced to mere data points and algorithms. It is therefore essential that those developing and implementing AI in medicine prioritize ethical considerations and take steps to ensure that the technology is used safely and responsibly. The potential benefits of AI in medicine are vast, but we must also be mindful of the potential risks and take steps to mitigate them.
There is no doubt that advances in technology have improved healthcare tremendously over the years, and AI is no different. AI has already been shown to improve healthcare through better patient outcomes, personalized medicine, and better access via its many tools. AI can aid healthcare providers in making highly informed decisions about patients' diagnoses and treatment options. In the example given, cancer is complicated and different for each patient and each specific type of cancer; AI can use data from the patient and other similar patients to streamline resources and give the best possible predictions. The dark side to this, as with many other technologies, is: where is the line in the sand? What are the rules and boundaries of this new technology? How do we prevent it from being used to harm patients instead of its intended good? Who, or what governing body, is going to decide what is okay and what is not? Can the AI develop biases over time that would negatively impact care? Who is legally and clinically responsible for healthcare errors when it comes to misdiagnosis, subpar treatment, or even death? I think AI shows a lot of promise as a new tool, but there needs to be an organization in healthcare that assesses, as objectively as possible, the pros and cons, the boundaries and limitations, and how it is most appropriately used in this setting. Through that lens and such an organization, AI can be a great tool for physicians and other healthcare workers to do good by their patients and to provide creative problem-solving for their unique clinical situations.
The only jobs to survive the AI revolution will be those that have very small data pools available or require massive creativity and novelty (more so than art and music).
Generally, it could be understood as framework rules: rules that regulate, or in a different sense, rules that moderate and keep everything in check and under control. Essentially, it means a framework of rules that regulates how things operate. Hope that helped.
Yes, it can be replicated quite easily. Now, is it real? Probably not. Does it matter? Probably not. When I go to a hospital, I want to get in and out as quickly as possible. Most nurses I have interacted with weren't that empathetic in their approach; they were more like neutrally doing their job.
There are a bunch of very long-winded tl;dr comments here, by channels with no avatar, content, or "About" info, written in the same kind of tone. I wouldn't be surprised if it's bots, the same person, or AI.
The speaker did an excellent job of discussing how artificial intelligence could be a game changer for more efficient healthcare in our future, especially in circumstances where healthcare providers struggle to create a treatment plan due to the lack of a definitive diagnosis. He uses cancer-related problems to discuss how AI can help better determine what area should be treated with chemotherapy by acquiring blood samples, diagnostic imaging, and other tests, and uploading these components into a system that would then generate a proper diagnosis, treatment, and management plan. While I agree that technological advances have drastically changed the way we function in the healthcare field, as well as the ways we are able to provide better healthcare to those in need, I think it is important to state that AI should only be used as an adjunct and never a replacement. Navid Saidy explains the limitations of using AI, including that, at the current state, there is a high chance of bias depending on how representative the pool of patients in the provided data set is.

I believe the biggest concern with some hospitals having AI at the forefront while others do not is the issue of justice. Justice, in the context of ethical healthcare, is the principle that forces us to look at whether something is fair and balanced for the patient. If we look at an individual patient, I believe justice is taken care of. However, would AI at certain locations mean the stratification of goods and services provided by certain hospitals would shift even further toward more affluent areas? There is already a clear disparity in resources and quality of care depending on whether one is at an inner-city hospital or a privately owned corporation. Of course, these issues will be here whether there is AI or not, but is it just to further widen the gap in the quality of healthcare provided? Will AI cause those who are in dire need but uninsured to suffer even more? Will AI create a larger monopoly in the healthcare world and make quality of care an even more "elite" privilege rather than a basic human right? These are thoughts that came to my mind, and I would love to hear any others. I do agree that we should look at how much good a system like this will bring before looking at the bad, but in today's post-pandemic world, it is hard not to wonder how things could be negatively impacted, if at all.
I appreciate the attempt at simplifying a complex field of computing science. What was missing is commentary on the ethics behind the use of emergent technologies such as AI. This matters because training an AI system requires both negative and positive outcomes to be fed back into the system. The subject of care must provide their informed consent and understand the experimental nature of the technology. The regulatory frameworks in place exist to help ensure that negative outcomes are reduced (based on experience) and that there is a responsible person for the decision-making that led to the outcome. Who is that responsible person with an AI system? The clinician who blindly trusts the AI, or the engineer who coded the algorithms? As a healthcare professional and digital health leader, I suggest that until those aspects are established for clinicians from a medico-legal, liability, education, and socio-technical point of view, we should be rigorously challenging the message of simplicity put forward here, so that AI can become a trustworthy tool.
I strongly agree with your statement that the ethical implications of artificial intelligence in healthcare need to be fully explored before it becomes routinely utilized. With the ethical principle of beneficence in mind, artificial intelligence in healthcare and the diagnostic process shows great potential to be a useful tool for clinicians, enabling faster and more accurate diagnoses that could benefit patients and improve health outcomes. However, a major ethical dilemma to consider is privacy. To be more useful, artificial intelligence systems need large amounts of patient data, so it is crucial that the use of this data does not infringe on the privacy of the patient's identity and health records. Another important consideration is informed consent, which implies that the patient fully understands what they are agreeing to. If artificial intelligence were used in the clinical setting, it would be imperative that the clinician explain how it operates and how it uses data to generate its responses. While artificial intelligence would support clinicians and act as another tool in the clinical process, human judgment is still completely necessary; artificial intelligence is not meant to replace clinicians or tell patients what to do. Finally, bias can be embedded in artificial intelligence systems through historical data and must be rooted out so as not to perpetuate those biases. Overall, artificial intelligence in the healthcare field holds great potential to become a valuable tool for clinicians and thus help patients, but it is vital to uphold ethical standards.
That is very true; the ethical considerations are an important thought here. We have recently written a blog post regarding ethical considerations, though with regard to AI in healthcare research in particular.
Thank you, sir, for your insight. As a digital health leader, can you comment on your current views on the use of AI in healthcare in general, and its role in healthcare administrative decision-making in particular? Awaiting your response.
I'm writing a persuasive essay on the benefits of using AI in healthcare. This was very helpful for looking at the benefits and getting an idea of how helpful AI can be in healthcare. It doesn't entirely take the decision-making role away from physicians; it's a great tool.
Did you finish your essay? Is it available anywhere?
I would love to read it.
Can you provide a link to, or a citation for, your essay, please?
Navid discussed how Artificial Intelligence (AI) can improve healthcare, and I completely agree. He pointed out the benefits of AI in healthcare, such as the ability to collect more data more quickly, the absence of unconscious bias, and more accurate diagnoses, all of which would help save lives. I assumed data would be entered into software from all over the world and be accessible within seconds. That is faster and more thorough than any physician or laboratory communication I've ever witnessed for diagnosing patients. One broad example would be COVID: could it have been isolated sooner if we had known the severity and consequences of the illness when it first began? The downside of this information being at everyone's fingertips is when a treatment is available in one country but not another. I imagine that would create an emotional challenge for the physician and patient if there is a known successful treatment they are unable to access; however, I feel that could eventually be resolved. Another concern would be the many exceptions we see in medicine. The AI may only output the most common symptoms or treatments, which would cover most cases, but the small number of "abnormal" cases may not benefit from an AI program. It would be nearly perfect if the AI could give an "I don't know" answer, as Navid suggested.
Unlike human physicians, AI will have no ability to develop unconscious bias. For each case, the patient's illness or disease is input, the AI compares the individual's information to a large data set (as sketched below), and that's it! It cannot take into consideration the "type" of patient, such as a helpless patient or a difficult patient. It cannot see patients' past experiences, such as substance abuse. Both of those, I think, can influence a human physician's opinions and potentially the patient's treatment. That being said, sometimes knowing the patient's personality does positively influence the type of treatment that would be best, because a compliant patient is better than a patient who refuses treatment.
Our world is growing and becoming more populated at a fast rate, and no human can keep up. Still, there is no way AI can take over completely: humans have higher-order thoughts and emotions that AI is far from containing. As long as the AI can continuously gather data from around the world, adapt to the collected data, and remain under human oversight, I think AI would be a huge benefit to medicine.
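As a minimal sketch of the kind of comparison described above, here is a hypothetical nearest-neighbour lookup over synthetic patient records; the feature layout, cohort size, and outcome labels are all placeholders:

```python
# Minimal sketch: find the past patients most similar to a new case
# and look at their outcomes. All data here is synthetic.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
past_patients = rng.normal(size=(1000, 6))  # e.g. labs, vitals, age...
outcomes = rng.integers(0, 2, size=1000)    # 1 = responded to treatment

index = NearestNeighbors(n_neighbors=25).fit(past_patients)

new_patient = rng.normal(size=(1, 6))
_, neighbors = index.kneighbors(new_patient)

# Share of the 25 most similar past patients who responded well:
print("estimated response rate:", outcomes[neighbors[0]].mean())
```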
Dr. Saidy made an interesting case for using AI in the future of healthcare. He talks about the best of AI and some regulations that would be needed. A few ethical components came to mind with this use of AI. AI algorithms risk perpetuating biases against certain populations if the data used to train them isn't representative; I loved how Dr. Saidy addressed this and how they are designing the system to be fair and unbiased, which is amazing progress. Another thing I was wondering about was the transparency of AI systems. AI systems can be opaque, making it difficult for patients and healthcare providers to understand how decisions are made, so it is important to ensure they are explainable. I also feel it is important to ensure that there are clear lines of responsibility for the decisions made by AI systems, and that mechanisms are in place to address errors and mistakes. Overall, the healthcare system can benefit greatly from AI, and advancing technology to help patients will be a great accomplishment. These are just some thoughts on things that need to be considered as we move forward with artificial intelligence in healthcare.
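On the explainability point, one common technique is permutation importance: shuffle each input feature and see how much the model's accuracy drops. A minimal sketch with scikit-learn, on a synthetic stand-in dataset:

```python
# Minimal sketch of permutation importance: score how much each input
# feature drives a model's predictions. Data and model are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the accuracy drop: a large
# drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```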
Sounds motivational for future generations of scientists!
I am a 3rd-year MBBS student. I love computers and technology more than medicine, and my vision is to work in AI in healthcare, IoMT, and other fields of technological innovation in healthcare. I know the basics of Python and C, have worked with Arduino and Raspberry Pi, and have general knowledge of AI and ML. I'm really passionate and have potential, but I'm not clear about the path I should take. Please help!
I am a 1st-year MBBS student. I think I have a thought process like yours.
I am learning Python because of a keen interest in AI in healthcare. Can you guide me, as far as your knowledge of the field goes?
@robustfactz4582 I learned from three sources. First, my brother, who is an AI engineer, so I know the roadmap and have relevant contacts, though he doesn't help me that much as he is busy with his startup. Second, talking to people online on forums like Facebook, Quora, and Reddit, and taking webinars, lots and lots of webinars, and one time talking one-on-one in a virtual conference with four people in medtech; one was the founder of Evocal Health. The great thing about webinars is that I connected with multiple people on LinkedIn through them. Then again, there will be many local in-person opportunities too. Third, and most importantly, work on your Python basics. If your basics aren't great, you will have problems ahead. I once spent two hours just to find out I hadn't added brackets in my code in several places.
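For what it's worth, a hypothetical illustration of the bracket mistake mentioned above, with the broken line kept as a comment so the file still runs:

```python
# The beginner mistake described above, kept as a comment because the
# real thing would stop the whole file from running:
#   scores = [0.91, 0.84, 0.77      <- SyntaxError: '[' was never closed

# Fixed version: every opening bracket has a matching closing one.
scores = [0.91, 0.84, 0.77]
print(max(scores))  # prints 0.91
```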
Thank you brother
Sounds nice.
One way is to keep across what’s happening in this space by subscribing to the Talking HealthTech podcast 😎🙏🏻
As we shift toward "personalized medicine," the use of AI in healthcare is inevitable. I really appreciate Dr. Saidy's comments on data bias, and as a medical student I wanted to know more. I was already aware of the biases found in current medicine but had not even considered the idea that our basic medical algorithms were biased. There is a great article by Katherine J. Igoe explaining the biases seen in medical algorithms. In it she explains that roughly 80% of our current genetic and genomic data comes from Caucasians, which skews our understanding of genetics toward Caucasians. Obviously, we cannot simply ignore race when working with genetic information, and in her article she suggests that the best way to manage the inevitable use of AI is to have a diverse group of professionals, not strictly a team of data scientists: a team consisting of physicians, data scientists, government, philosophers, and everyday civilians.
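As a minimal sketch of how that kind of skew could be surfaced, assuming a hypothetical cohort table with an ancestry column (the numbers below are made-up placeholders, not the figures from the article):

```python
# Minimal sketch: check how ancestry groups are represented in a cohort.
import pandas as pd

cohort = pd.DataFrame({
    "ancestry": ["European"] * 80 + ["African"] * 8
              + ["East Asian"] * 7 + ["South Asian"] * 5
})

# Share of each group in the training data; a heavy skew toward one
# group warns that model accuracy may not transfer to the others.
print(cohort["ancestry"].value_counts(normalize=True))
```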
The inclusion of philosophers and regular folk is an interesting idea, and surely one that runs against what big companies think. The power of citizen science is far greater than any gate-kept, "professional" attempt at solving a problem. I'm curious as to your reason for including philosophers on the team?
I'm impressed by how AI is being used in healthcare. It's helping to diagnose diseases and improve patient outcomes.
Two years later, and what was said in this TED Talk is still true and on a good trajectory. Many considerations and conversations have come into play.
Artificial intelligence could really make medicines for immortality, make our earth a heaven for those who are chosen, and save the latest technologies.
If and only if AI is implemented correctly, in the right way.
Dr. Saidy's talk regarding artificial intelligence (AI) in healthcare argued that AI can provide many benefits, including making hospitals more efficient and improving access to care by providing accurate decision-making tools. Interestingly, AI can factor in the outcomes of thousands of other patients to determine what will work best for a patient based on their individual circumstances, by comparing them to other patients with similar circumstances. This could provide insight into how physicians determine which treatment or procedure may be best for their patients. However, I would argue that no two people and their specific circumstances are identical, and no identical outcome is guaranteed. AI could be used to make recommendations, but there could be circumstances that AI fails to factor into its algorithm, even as AI evolves over time and gets better at predicting healthcare outcomes. An ethical consideration when implementing AI in healthcare is that AI systems can have a significant impact on patient autonomy and decision-making; autonomy is undermined if AI systems are used to make decisions about diagnosis, treatment, or clinical outcomes without human input. I think it's important that AI systems be designed and implemented in a way that respects patient autonomy and preferences, so that, for example, the patient still gets to decide which treatment works best for them when presented with all the options and the risks and benefits of each.
Also, if the AI algorithm is not up to date, or if there is an issue with the AI system's learning process, the patient could receive an incorrect diagnosis or incorrect treatment, which would not improve healthcare outcomes. These unintended consequences and errors from relying on the AI system to guide diagnosis and treatment can put patient safety at risk and cause harm. This relates to the ethical principle of nonmaleficence, which requires that healthcare providers do no harm to their patients. To comply with nonmaleficence, AI systems need to be designed and implemented in a way that minimizes the risk of harm, with any potential harm carefully weighed against the potential benefits of using the system. Dr. Saidy brought up a valid point that AI systems often don't use data sets that represent people of all races. When AI predicts an outcome for an Asian patient from a predominantly white male data set, the prediction is likely to be less accurate. AI systems can therefore inadvertently perpetuate biases and discrimination if they are trained on biased or incomplete data. Overall, there are benefits to implementing AI, but there are also risks and challenges that need further investigation before AI is allowed to fully predict and guide outcomes.
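One concrete way to surface the problem this comment describes is to score a model per demographic group rather than trusting a single overall number. A minimal sketch on placeholder data:

```python
# Minimal sketch: measure accuracy separately for each demographic
# group. The tiny table below is a synthetic placeholder.
import pandas as pd

results = pd.DataFrame({
    "group":   ["White", "White", "Asian", "Asian", "Black", "Black"],
    "correct": [1, 1, 1, 0, 0, 1],   # 1 = prediction matched outcome
})

# Overall accuracy can hide a gap that per-group accuracy exposes.
print("overall accuracy:", results["correct"].mean())
print(results.groupby("group")["correct"].mean())
```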
While the implementation and use of a new technology can be scary, the potential for greater, more streamlined success far outweighs any hesitation or fear of using AI. With a strong and unbiased foundation, along with flexible yet stringent monitoring, we could enter a new era of healthcare.
Dr. Saidy did an excellent job of showing the huge impact artificial intelligence can have on healthcare. The ability to examine data collected from thousands of patients across the world would be an invaluable tool for physicians and has the potential to vastly improve care. The barriers Dr. Saidy described that are preventing AI from becoming a successfully used tool in healthcare raise an important question: is it ethical to resist the adoption of a tool with such huge potential to improve care and save countless lives? By not moving quickly to change regulations and implement this game-changing new technology, are we effectively killing people? Of course, it takes time to change regulations and set up the right controls before we can fully start using AI. But if we as physicians and healthcare providers know something will save our patients' lives and we don't do everything we can to use it, I think we're violating our ethical obligation to our patients. We should do all we can to make these tools available as soon as possible. And despite all its benefits, I think we should be careful not to use AI as a crutch that replaces critical thinking and personalized care.
Artificial intelligence has some amazing potential benefits in the healthcare field: efficiency improvements for hospitals, assistance in guiding physicians through patient treatment regimens, and, greatest of all, the potential to diagnose a patient. Dr. Navid Saidy discussed some very important complications to consider when introducing artificial intelligence into medical practice, including the fact that regulations for medical devices typically assume a physical device, whereas artificial intelligence is software that evolves and does not produce a static, repetitive outcome. If artificial intelligence's purpose is to diagnose and give treatment options and prognoses, its output has numerous possible outcomes, which can be hard to quantify and therefore hard to regulate. Even if, as stated in the video, the regulations change to allow more transparency and real-time monitoring, there are still risks.

One of the main concerns about artificial intelligence is that the data used to create its program is biased. Since humans collect the data, and interpret it with implicit assumptions that are then incorporated into the system, this bias is transferred to the artificial intelligence models and can lead to biased results in diagnosis and treatment. This is why it is so vital, when assessing these new technologies, that the results be accurate. This leads me to an important topic to consider with the implementation of artificial intelligence: beneficence. Beneficence is the act of doing good by benefiting the patient more than doing harm. Artificial intelligence has great potential to reach a correct diagnosis more efficiently, which could greatly benefit patients in time-sensitive care. However, in cases where a wrong diagnosis is stated with confidence by artificial intelligence, this could lead to greater harm to the patient. The capacity for AI to state when the answer is unknown and when more testing is needed is crucial to the application of this technology.

These drawbacks need to be critically supervised as artificial intelligence is incorporated into medicine. It is naïve to say that artificial intelligence won't be a part of medicine in the future. All the same, we need to be careful and diligent in assessing the technology and its outcomes for patients. It is important to remember that part of healing comes from a healing touch and the emotional and spiritual connection between humans. As technologies become more and more integrated into our society, we must prioritize and preserve our humanity.
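A minimal sketch of the "answer is unknown" behaviour that comment calls for: abstain whenever the model's top class probability falls below a confidence threshold. Model, data, and threshold here are illustrative placeholders:

```python
# Minimal sketch of an abstaining classifier: say "I don't know" when
# the top predicted probability is below a confidence threshold.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

THRESHOLD = 0.8  # illustrative; in practice tuned on validation data

for p in model.predict_proba(X[:5]):
    if p.max() < THRESHOLD:
        print("I don't know, refer to a clinician")
    else:
        print(f"predicted class {int(np.argmax(p))} "
              f"(confidence {p.max():.2f})")
```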
Amazing information, Dr. Navid. Looking forward to the future.
AI is the future of efficient care in the health sector.
@comment sense Wow, a comment coming from a gutsy pseudo-ID of a paralysed, hemiparetic brain.
Kudos 👍
@comment sense Have you heard of the term "differential diagnosis"?
You are from the same breed who were against the use of machines in industry, or automobiles for travel.
So say whatever you want, pal.
You can't stop the future from happening.
👍
Hope you keep on spamming thereafter and proving yourself outdated.
This is very interesting and I am excited to see what this has to offer. There are so many pros to this type of technology and, as was mentioned, there are a lot of cons as well. It is so important that this stays highly regulated. One of the biggest issues that I could see arising out of this situation is the fact that AI technology is so new. New problems are found within the technology all the time, and we are discovering new things about it every single day. The reason this is an issue is the consequences: when ChatGPT makes a mistake, it generally does not mean the life or death of a human being, whereas with this technology things can very easily turn bad quickly. I feel as though there needs to be more time spent in the world of AI before we jump to using it in a real-life setting. As an example, I think it would be good to use this alongside a doctor for a minimum of five years: see exactly what the doctor recommends and then compare that to what the artificial intelligence was recommending. The success rate needs to be almost perfect in these types of scenarios. Another issue that I see is liability. If the artificial intelligence recommends certain treatments or diagnoses a patient, who is going to be liable when things go south? Is it going to be the doctor in charge, because he should have known better than to follow what the AI was saying, or is it going to be the company that created the AI? Both would have strong arguments as to why it should be the other, and I feel as though this could leave the patient in a position where they cannot receive the compensation or seek the justice they need. Lastly, artificial intelligence is created by a company, and for-profit companies are created to do just that: make a profit. If there are companies competing to have their artificial intelligence working in certain hospitals, who is to say that there will not be shortcuts taken, or poor leadership that leads to disasters within the company and, in turn, within the healthcare system? I feel as though a lot of the points I brought up are very critical to think through before this type of technology becomes the norm. I'm sure this has been discussed many times by others, but for the future of healthcare I do hope that it is in the right hands. While a lot of what I said was geared towards the negative, I really do hope that we can see this technology working flawlessly in the future, as I think it has great potential to do amazing things.
The depiction of AI in popular culture has often been one of dystopian futures, where machines rise against humanity. However, as the speaker rightly points out, the reality is far from this portrayal. AI has the potential to revolutionize healthcare, offering personalized care, streamlining hospital operations, and providing accurate decision-making tools. The example of AI's role in cancer diagnosis and treatment is particularly poignant. By consolidating data from various sources, AI can provide accurate predictions about a patient's diagnosis, treatment options, and prognosis. This is a game-changer, especially for patients like Peter, who, without AI's intervention, might have faced a grim prognosis.
However, the journey of integrating AI into healthcare is not without its challenges. One of the most significant hurdles is the existing regulatory framework, which is not designed to accommodate the dynamic nature of AI. Traditional software is static, producing the same output for the same data. In contrast, AI has the intrinsic ability to learn and evolve, making it more adaptable and, ideally, more intelligent over time. Locking the learning potential of AI models, as the current regulatory approach suggests, limits their potential and can even be detrimental to patient care.
Furthermore, the issue of data bias is critical. If AI models are trained predominantly on data from one demographic, their accuracy and reliability can diminish for other demographics. It's essential for AI developers to ensure their models are trained on diverse datasets. However, as the speaker mentions, this isn't always feasible due to the availability of data. Therefore, building a functionality where AI models can acknowledge their limitations and uncertainties is crucial.
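To picture the kind of "acknowledge their limitations" functionality the previous paragraph calls for, here is a minimal sketch of a diagnostic classifier that abstains and defers to a clinician whenever its top predicted probability falls below a threshold. This is my own illustration, not anything from the talk: the diagnosis labels, the probabilities and the 0.80 cutoff are all invented for the example.

```python
# Minimal sketch of an "I don't know" mechanism for a diagnostic model:
# abstain and flag for more testing when the model's confidence is low.
# Labels, probabilities and threshold are illustrative assumptions.
import numpy as np

DIAGNOSES = ["lung primary", "breast primary", "colorectal primary"]
THRESHOLD = 0.80  # below this top-class probability, defer to a clinician

def predict_or_abstain(probabilities: np.ndarray) -> str:
    """probabilities: the model's output distribution over DIAGNOSES."""
    top = int(np.argmax(probabilities))
    if probabilities[top] < THRESHOLD:
        return "uncertain: recommend further testing and clinician review"
    return DIAGNOSES[top]

print(predict_or_abstain(np.array([0.91, 0.06, 0.03])))  # confident -> 'lung primary'
print(predict_or_abstain(np.array([0.45, 0.40, 0.15])))  # abstains
```

The threshold itself would have to be validated per model and per population; the point is only that "defer to a human" can be an explicit, testable output rather than an afterthought.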
In conclusion, the potential of AI in healthcare is immense. However, to harness this potential fully, we need to address the challenges head-on. This involves establishing new regulatory frameworks in collaboration with AI developers, healthcare practitioners, policy advisers, and patients. By doing so, we can ensure that AI serves the entire population equally, leading to a future where healthcare is more personalized, efficient, and effective.
While AI has the potential to revolutionize healthcare, there are also potential negative consequences to its use. One major concern is that AI may perpetuate or even amplify existing biases and discrimination in healthcare, because AI is only as unbiased as the data it is trained on and the way it is programmed. Since the humans who create these programs have their own implicit biases, the AI will replicate them and may even amplify them with its efficiency. Another concern is that healthcare professionals and patients often are not educated enough to understand the complex algorithms used by the AI and how it arrives at its decisions. This lack of transparency can decrease trust in the technology. Furthermore, large amounts of data are required to develop reliable AI systems, much of it sensitive medical information. This raises privacy and security concerns, particularly if breaches occur or unauthorized access to the data is obtained. While AI has enormous potential to improve healthcare, it is important to be aware of the potential negative consequences of its use.
Sure
What happened to asking the patients if they want this???
They should decide… not the doctors, not hospitals, not insurance companies or the government!!!
Patients make the decision!
I do believe that the extent to which AI can help the healthcare system may be overstated.
I can understand the importance of using AI to consolidate data, having large amounts of information ready to go when needing to refer to treatment options and such. I can see how this saves time and resources, and saves us from error at times. I can also understand the importance of wanting to be as efficient as possible in many situations in medicine. But how well does AI understand the risks and benefits for each patient? How well does AI truly follow beneficence for each individual patient? AI can't necessarily understand the emotional or mental toll certain treatments can have on a patient beyond the typically stated adverse reactions.
A major problem arises when the patient does not follow the standard of care, when the patient does not respond the way many others have to treatments, procedures, medicines, etc. Dr. Saidy states that AI can even learn from these patients who did not follow the treatment and can help come up with the following steps. But this is all still an algorithm backed up by some data. Do we know if that data is recent? Do we know if it follows a trend and is generalizable to other places? Do we know who collected this data? These are all questions we as healthcare professionals need to think about.
Think about a medication change: we could easily train a robot to know DDIs and which medications can be mixed with one another, but what happens when a patient has an allergic reaction to a new medication and needs to replace it with something else? Further yet, what if the medication used to replace the one that caused the allergic reaction required two more medication changes because it did not work with all their existing meds? Here is where we may end up spending more money or time than we thought we saved with AI. And we could have resolved the allergy or reaction faster if a human doctor had been around to supervise, or to think to order LFTs or genetic screening for patients with different metabolizing abilities. When we have to pick up the pieces AI leaves behind because of its lack of critical thinking, we are taking two steps back. We have to preserve beneficence, and all the thought processes and considerations that surround doing what is best for the patient.
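For what it's worth, the "easy" half of that medication example really is just a table lookup. Here is a toy sketch of the kind of DDI check a machine handles well; the interaction table and drug names are invented for illustration, and this is exactly the part that does not cover the allergy and metabolizer scenarios above.

```python
# Toy sketch of a drug-drug interaction (DDI) lookup: the mechanical part of a
# medication change that is easy to automate. The table below is invented for
# illustration and is in no way a real clinical reference.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"sertraline", "phenelzine"}): "serotonin syndrome risk",
}

def check_new_med(new_med: str, current_meds: list[str]) -> list[str]:
    """Return a warning for every known interaction between new_med and current meds."""
    warnings = []
    for med in current_meds:
        issue = KNOWN_INTERACTIONS.get(frozenset({new_med.lower(), med.lower()}))
        if issue:
            warnings.append(f"{new_med} + {med}: {issue}")
    return warnings

print(check_new_med("aspirin", ["warfarin", "metformin"]))
# -> ['aspirin + warfarin: increased bleeding risk']
```

Nothing in that lookup knows about a patient's allergy history or metabolizing ability, which is the comment's point: the lookup is the cheap part, the judgment is not.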
I will say, however, that there are great ways to use AI, and there should be more information on specific uses such as locating the primary site of a cancer. I think there is a balance between allowing AI to take over an entire patient's care and allowing AI to aid us with information we cannot see or feel with human sight or touch. But when we consider places such as an ER, where decisions need to be made quickly, is there a potential for doctors to rely on this information too much since they need to work quickly on their feet? Lastly, Dr. Saidy is aware of data bias and how it could skew the information depending on a patient's background. I believe that if we want to do what is best for the patient, these tools to ensure bias does not occur are extremely important, and manufacturers should consider perfecting them prior to using AI on patients and potentially having the AI misdiagnose. In the case of misdiagnosis in particular, AI could potentially break the code of non-maleficence: if a patient is misdiagnosed, chances are their treatment is incorrect for their diagnosis, in which case we could be causing harm to the patient without knowing it. This is where, again, AI needs to be used as a backup tool, not the lead tool.
Navid talks about the capabilities of artificial intelligence and its potential to improve care for patients. I agree that AI can be a game changer; it could improve diagnosis and care in a lot of ways. AI will be consistent: it won't miss things that a human will, because AI doesn't have a bad day, isn't affected by a heavy patient load, and isn't worried about 40 patients at the same time. Still, I don't believe that AI will drastically improve healthcare; rather, I believe it could be damaging. When discussing AI there is always one thing that is left out: the human touch. Doctors care about their patients; they dedicate their lives to learning exactly how to help them, and if they don't know how, they've learned how to learn so that they can. While AI does learn and grow, it doesn't have a personal connection, a desire for anyone's well-being, or an emotional connection with anyone. This is what drives physicians; nobody goes into medicine for the money or for the job itself. Yes, the money can be good, but $400k of debt to pay off to become a doctor eats up so much of it. Most doctors don't have a typical nine-to-five job; they don't go home until all the patients have been seen, the charting has been done, and the staff has gone home. If an emergency comes up, they don't get to go home until it's taken care of. So why do doctors go into medicine? To help people. Every doctor is there because they genuinely care about the person they are seeing; this isn't something that AI can ever do. Care and passion can go a long way as well: when you are passionate about something, there's nothing you won't do to achieve what you are after, and you won't stop working until you've accomplished what you set out to do. If a doctor can't figure out what's going on, he's going to dedicate all the time he has to figuring it out. That's why AI can never replace a healthcare worker. AI doesn't know that a patient has a wife and kids, or grandkids they care for, or foster kids they have taken in, but a doctor does. Doctors live by the principle of beneficence, to do good, and that's something AI doesn't understand. Now, a doctor could utilize AI to help them come to a conclusion or find the answer to a question; there are ways to take advantage of technology while still taking advantage of human care.
I enjoyed reading your comment, but what if A.I. were trained to behave like a human, with emotions and consciousness? A.I. in medicine has hope; it will drastically improve healthcare. A lot of work has to be done for A.I. in medicine to be perfect. It might not be there now, but it's the future.
Long overdue. Use AI to predict medication choice and dosage (e.g., for ADHD) from past response.
With WhatsApp, the patient simply exports the chat log as a .txt file and an LLM does the analysis; just use GPT-4 and so on (you don't even have to fine-tune, though it might help), as long as it learns and the data can be reused or remain beneficial for future models.
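A minimal sketch of the workflow that comment describes, assuming the official openai Python client and WhatsApp's plain-text chat export. The regex, file name, model choice and prompt are illustrative assumptions; the export line format varies by locale, so the pattern would need adjusting in practice.

```python
# Sketch: parse a WhatsApp-exported chat log (.txt) and ask an LLM to
# summarise medication responses mentioned in it. Illustrative only.
import re
from openai import OpenAI  # assumes the official openai package (v1+) is installed

# Matches lines like "12/31/23, 9:15 PM - Alice: took 10mg, felt fine"
LINE = re.compile(r"^(\d{1,2}/\d{1,2}/\d{2,4}), (\d{1,2}:\d{2}(?:\s?[AP]M)?) - ([^:]+): (.+)$")

def parse_log(path: str) -> list[dict]:
    """Turn an exported chat .txt into {date, time, sender, text} rows."""
    rows = []
    with open(path, encoding="utf-8") as f:
        for raw in f:
            m = LINE.match(raw.strip())
            if m:
                date, time, sender, text = m.groups()
                rows.append({"date": date, "time": time, "sender": sender, "text": text})
            elif rows:
                rows[-1]["text"] += " " + raw.strip()  # continuation of a multi-line message
    return rows

def analyse(rows: list[dict]) -> str:
    """Send the transcript to the model and return its analysis."""
    transcript = "\n".join(f"{r['date']} {r['sender']}: {r['text']}" for r in rows)
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Summarise any reported medication doses and responses over time."},
            {"role": "user", "content": transcript},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(analyse(parse_log("chat_export.txt")))
```

Note that sending a patient's chat history to a third-party API raises exactly the consent and privacy questions discussed elsewhere in this thread.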
This video helped me get my degree.
Just like anything in healthcare that has ever been developed and adopted, AI needs to be tested in the field with live subjects. There may be casualties and collateral damage in this approach, but that's been the playbook of medical innovation with every medication and device. Informed consent is the key, and the adoption of this technology needs to be driven by clinicians with routine feedback, not regulators. Therefore, the full adoption of AI in healthcare will take decades and will be fraught with setbacks and perverse incentives. The road towards a regulated AI healthcare model is going to be very long. People don't care if AI writes a poem for them, but for diagnosis and management there are going to be trust issues, and it may require generational turnover for human acceptance. However, in a perfect world where nothing goes wrong, AI would be amazing for healthcare as far as accuracy and efficiency.
I agree the road will be long and expensive for AI developed by biased and selfish individuals. It will not be so with developers who commit to eliminating bias in every aspect they can, leading to a much more useful and functional AI system. If you devote the time to a solid set of systems, the answer/path/solution will come to fruition much faster.
Actually, I want to join this project, especially to improve the survival rate of cancer patients while minimizing the duration of the diagnostic process.
Sounds great. Now if only people could afford healthcare
AI learns from new data. An AI dev knows how much new data is needed to fine-tune the existing model.
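For readers wondering what "fine-tune the existing model on new data" looks like in code, here is a minimal sketch using the Hugging Face Trainer API. The starting checkpoint and the tiny in-memory dataset are placeholder assumptions; a real clinical model would need far more data and validation.

```python
# Sketch: continue training an existing checkpoint on newly collected,
# labelled examples instead of training from scratch. Illustrative only.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"  # placeholder starting model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# "New data" arriving after deployment, e.g. freshly labelled outcome notes.
new_data = Dataset.from_dict({
    "text": ["responded well to dose increase", "no improvement after 4 weeks"],
    "label": [1, 0],
})
tokenized = new_data.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=64),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
)
trainer.train()  # updates the existing weights rather than starting over
```

How much new data is "enough" is an empirical question, which is the commenter's point: the developer has to measure it, not assume it.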
What is your point here?
Good news... now we have AI to reduce physician burnout as well.
What about some rare instances where patients were told they were going to die soon but didn't, because of their willpower, etc.? Does AI take into account a patient's will and hope to live? Their zeal for life...
Shoutout to the dude in the front row with the pineapple shirt.
AI in medicine has the potential to bring about significant benefits in terms of improved patient outcomes, more efficient diagnoses, and reduced healthcare costs. However, there is also a risk of harm if AI is not used ethically and with caution. One significant ethical concern is the potential for maleficence, or harm caused by the misuse or unintended consequences of AI.
For example, if an AI system is not properly trained or validated, it could make incorrect or biased decisions that harm patients. Additionally, if AI is relied upon too heavily, it could lead to dehumanization of healthcare, with patients reduced to mere data points and algorithms. It is therefore essential that those developing and implementing AI in medicine prioritize ethical considerations and take steps to ensure that the technology is used safely and responsibly. The potential benefits of AI in medicine are vast, but we must also be mindful of the potential risks and take steps to mitigate them.
Just like with employees and employers, it’s all about the training
I am a newbie in med and don't know exactly where to start, but I am sure I am interested in creating something that helps people.
There is no doubt that advances in technology have improved healthcare tremendously over the years, and AI is no different. AI has already been shown to improve healthcare through better patient outcomes, personalized medicine and better access through its many tools. AI can aid healthcare providers in making highly informed decisions about patients' diagnoses and treatment options. In the example, cancer is complicated and different for each patient and each specific type of cancer. AI can use data from the patient and from other similar patients to streamline resources and give the best possible predictions. The dark side to this and many other technologies is the question of where the line in the sand lies. What are the rules and boundaries of this new technology? How do we prevent it from being used to harm patients instead of doing its intended good? Who or what governing body is going to decide what is okay and what is not? Can the AI develop biases over time that would negatively impact care? Who is legally and clinically responsible for healthcare errors when it comes to misdiagnosis, subpar treatment or even death? I think AI shows a lot of promise as a new tool, but there needs to be an organization in healthcare that assesses, as objectively as possible, the pros and cons, the boundaries and limitations, and how it is most appropriately used in this setting. Through that lens and such an organization, AI can be a great tool for physicians and other healthcare workers to do good by their patients: to provide creative problem solving for each unique clinical situation.
The only jobs to survive the AI revolution will be those that have very small data pools available or require massive creativity and novelty (more so than art and music).
What’s more creative than art and music?
Could someone help me by explaining what "regulatory frameworks" means in medical terms? Or provide an example?
Thanks :)
Generally, it can be understood as a framework of rules: rules that regulate, or in a different sense, rules that moderate and basically keep everything in check and under control. Essentially it means a framework of rules that governs how things operate. Hope that helped.
Where did you give this presentation, sir?!
A job model to advance AI software might exist.
What qualifications are needed?
AI does not have feelings: it lacks the empathy, the emotions and the therapeutic touch of human beings...
AI can save lives…
Isn't that how it always starts?? Before they take lives!?! 🙄
Maybe they will make a robot nurse and solve the world's problems (I doubt compassion or empathy can ever be replicated), but does anyone care?
Yes, it can be replicated quite easily. Now, is it real? Probably not. Does it matter? Probably not. When I go to a hospital I want to get in and out as quickly as possible. Most nurses I interacted with weren't that empathetic in their approach; more like neutrally doing their job.
There are a bunch of very long-winded tl;dr comments here, by channels with no avatar, content or "About" info, written in the same kind of tone. I wouldn't be surprised if it's bots, or the same person, or AI.
What is the future of pharmacy professionals in this?
I'm in the medical summarisation profession. So what is the future career scope, and what could the different areas be?
Personalised medication
3D-printed pills
Sending medications to specific target receptors in the body
Just to name a few, as I know almost nothing.
@comment sense Thank you for explaining it broadly.
In the R&D field, I guess!