03 Apr 2019

Some Challenges and Opportunities for AI in Healthcare: Part 1

The amount of data in our world has been exploding, and analysing this big data to find interesting correlations or meaningful patterns is a huge challenge. Industries such as telecom, transportation, retail, financial services, and insurance were the first to face these challenges. Recently, healthcare has been identified as an area where utilising big data will benefit humankind, for example by identifying patients who are deteriorating into more severe stages of disease and, consequently, suggesting tailored treatments.

Big data in the healthcare industry mostly consists of a variety of numerical data: time series (signals) of electric potentials from the brain, muscles, and heart (EEG, EMG, ECG); image data of different modalities (CT scan, MRI, X-ray); image sequences or video data (DCE-MRI); audio data including heart and lung sounds; and longitudinal time series such as readings of blood pressure, HbA1c, eye pressure, body temperature, skin impedance, and enzymes. Using artificial intelligence to analyse and model these signal trends, images, and audio recordings, the way they behave, and the anomalies affecting them could aid disease diagnosis and treatment, and ultimately save lives.

However, clinicians today face the extensive task of extracting useful information from big data held in transactional records, often comprising millions of patient records. The level of expertise required is often beyond what a typical data manager can handle. The rising cost of healthcare for an ageing population, coupled with the pressure to deliver quality care, means that clinicians will struggle to find the time and resources to carry out big data analytics even with the help of a data manager. Outsourcing data analytics may not always be desirable either, owing to its cost and other practical considerations such as data privacy and security. On the one hand, there are specific problems in healthcare that need to be solved; on the other, there are plenty of Artificial Intelligence (AI) and machine-learning algorithms available that could be used to solve the data analytics problems in healthcare.

Coupled with faster computers and the scalable computing resources made possible by cloud computing, I believe there are significant opportunities, as well as challenges, in tackling big data analytics in healthcare.

 

Challenges

Access to Patient Data and Lack of Incentives

With respect to privacy and proprietary rights, access to curated health data is restricted by patient confidentiality concerns, which are legally protected by various local laws and amendments. This is a considerable challenge: stakeholders are hesitant to share patient data even after it has been de-identified. Additionally, using this big data for research and for scientific validation of digital artefacts is an intricate process. In other words, collecting data from several hospitals, de-identifying it, and combining it into a single database is never an easy task. This problem is closely related to the one explained in the next subsection.

Privacy considerations in turn raise issues of security. Since big data is a consequence of the escalating amount of information in the health and medical industry, it is crucial to protect this data from hacking and cybercrime, where stolen records are sold for large sums of money. Security for such curated data is typically provided by data encryption, monitoring of communication channels, multi-factor authentication, and similar measures. Moreover, access to increasingly globalised data requires continuous assessment and renewal where necessary.

Another ongoing debate concerns the lack of incentives. Individual organisations usually do not organise their data for research purposes because they are oriented primarily towards improving their revenue. Only a few academic institutions that provide care to patients, and are motivated by improving the quality of treatment and advancing knowledge, are paving the way for these big data applications in healthcare.

 

Interoperability

Interoperability is the ability of a system or product to adapt its interfaces so that it can provide services to other systems or products, now and in the future, without restriction.

The question is how this becomes a challenge for the application of artificial intelligence to big data in healthcare. A quote from a trade magazine, "the next advancement in the growing virtual technology is universal access", aptly captures the essence of interoperability. In practice, it allows health information to be shared among various stakeholders (providers, patients, organisations, and so on) and used without any special effort on the patient's part.

This is not as easy to implement as it is to define. Interoperability requires high standards and controlled data interfaces so that the data can be consolidated into a single unified database (big data). The complexity lies in merging data from inconsistent systems; in other words, in creating interoperability among the different existing systems.

A few terms which describe this challenge in brief are data classification and modelling, data storage, and personnel. As the saying goes, "two heads are better than one": to exploit the potential of AI algorithms, it is necessary to have well-structured and classified data for modelling the problem and, subsequently, for intervention. As a consequence of this classification, storage becomes a substantial challenge in its own right. Since big data is colossal, it is essential to build up storage that can collect data without interruption, offers high compatibility, and gives smooth access to many users. One solution is to keep the data in cloud storage, which is where cloud computing technology becomes important.

The rate at which data can be transmitted is just as important as how it is organised. Given the principles behind AI models, it is necessary to combine data from ill patients with data from healthy individuals in order to achieve better understanding and fair decisions once the models are deployed.

Unless this challenge is overcome, the benefits of artificial intelligence algorithms cannot be fully realised. Recognising the importance of interoperability, a few countries such as the US have developed EHRs (electronic health records) to partially remove this obstacle, albeit at the cost of decreased efficiency.

Missing values problem

Methods such as autocorrelation, cross-correlation, transfer entropy, randomisation tests, and phase slope are used to analyse regular time series, whose observations are sampled at equal intervals of time. In today's healthcare scenario, however, we are often confronted with irregular time series, whose observations are not sampled at equal intervals. For example, it is practically impossible to collect blood pressure readings from a patient at regular intervals. The irregularities also include gaps or missing samples, because a patient may miss a test or a clinician may cancel or reschedule an appointment.

From the literature, an irregular time series can be divided into two types: gappy series (missing data) and series sampled at non-uniform intervals of time. A gappy series can be regularised using 1) interpolation techniques and 2) regression analysis; filling gaps in this way is called imputation in machine learning. The former category includes linear, Akima-spline, and cubic-spline interpolation, while standard techniques in the latter include autoregressive models such as ARMA and ARIMA.

Non-uniform sampling intervals can be addressed using spectral analysis. The idea is to regularise the time series by describing it with Fourier or wavelet transforms. The Lomb-Scargle Periodogram (LSP) is the best-known example: a least squares fit of sine curves to the data yields a spectrum from which the power spectral density (PSD) can be calculated, and correlation functions related to the PSD can then be computed via inverse Fourier transforms. If the least squares optimisation in LSP assumes normally distributed noise, it becomes equivalent to Maximum Likelihood Estimation (MLE). Spectral-analysis-based algorithms are designed for periodic signals and may not work well for non-periodic ones. Because their underlying computation is a least squares fit, they are also not robust in the presence of outliers, and computing the correlation function is heavy and can be complex. Nevertheless, the kernel trick can be employed to compute the correlation function through kernels, avoiding this heavy computation.
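As a rough illustration of these two strategies, the minimal sketch below (using NumPy and SciPy; the blood-pressure values and sampling times are invented for illustration) fills the gaps in an irregularly sampled series by linear and cubic-spline interpolation, and separately estimates the spectrum directly from the irregular samples with a Lomb-Scargle periodogram.

```python
# Minimal sketch: regularising an irregular, gappy series by interpolation,
# and spectral analysis of the irregular samples via Lomb-Scargle.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import lombscargle

# Hypothetical blood-pressure readings taken at irregular times (hours).
t = np.array([0.0, 1.2, 2.9, 3.1, 6.8, 7.5, 11.0, 13.4])
bp = np.array([118., 121., 117., 119., 125., 123., 120., 122.])

# 1) Imputation by interpolation onto a regular hourly grid.
t_regular = np.arange(0.0, 14.0, 1.0)
bp_linear = np.interp(t_regular, t, bp)        # linear interpolation
bp_spline = CubicSpline(t, bp)(t_regular)      # cubic-spline interpolation

# 2) Spectral analysis without regularisation: least squares fit of
#    sinusoids to the irregular samples (Lomb-Scargle periodogram).
freqs = np.linspace(0.1, 2.0, 200)             # angular frequencies (rad/hour)
power = lombscargle(t, bp - bp.mean(), freqs, normalize=True)

print(bp_linear.round(1))
print("dominant angular frequency:", freqs[np.argmax(power)])
```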

Curse of dimensionality

The difficulties encountered when working with high-dimensional data are collectively termed the curse of dimensionality. The term was coined by Richard Bellman in the late 1950s during his work on dynamic programming. The phenomenon is observed in high-dimensional data, i.e. data with a large number of parameters and attributes, and it becomes practically significant when the number of features exceeds the number of experimental units (samples).

More precisely, an increase in dimensionality results in higher computational cost and time. For instance, suppose an AI model is designed to predict the presence and concentration of bacteria in a 2D petri dish; since it was developed for that setting, its prediction accuracy is high. If the 2D dish is replaced by a 3D container, the prediction accuracy gradually decreases and the computation time increases. The example also shows that, as dimensionality increases, choosing an adequate sample size becomes cumbersome.
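One way to see the curse in action is the concentration of distances: as the number of dimensions grows, the nearest and farthest neighbours of a point become almost equally far away, so similarity-based reasoning degrades. The small sketch below (with purely random, illustrative data) demonstrates this effect.

```python
# Illustrative sketch: the relative gap between the nearest and farthest
# neighbour shrinks as dimensionality grows (distance concentration).
import numpy as np

rng = np.random.default_rng(0)
n_points = 500

for dim in (2, 10, 100, 1000):
    X = rng.random((n_points, dim))            # points uniform in the unit hypercube
    q = rng.random(dim)                        # a query point
    d = np.linalg.norm(X - q, axis=1)          # distances from the query to all points
    contrast = (d.max() - d.min()) / d.min()   # relative spread of distances
    print(f"dim={dim:5d}  relative distance contrast={contrast:.3f}")
```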

 

Opportunities

The pace at which digital technology intervenes in everyday human problems is increasing day by day, and this has reduced the human effort required for large computational tasks. Artificial intelligence can act as a personalised assistant to doctors, examining the medical reports of current patients and predicting the required treatment, thereby reducing the doctors' workload.

These algorithms have improved the quality of treatment, disease surveillance, and patient satisfaction. Some of the areas in which AI has proved itself are:

 

Predictive Analytics

In predictive analytics, mathematical procedures and historical data are used to estimate the probability of future outcomes. The technique is commonly used by CMS and other payers (those who pay for the treatment) for fraud prevention, in order to ensure that payments are accurate: an expected payment is predicted in much the same way a disease is, and this predicted payment is compared with the bills actually submitted to assess the probability of fraud.
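A hypothetical sketch of this idea follows: an expected payment is learned from historical claims and then compared with the billed amount, flagging claims that deviate strongly. The feature names, numbers, and threshold are illustrative assumptions, not taken from any payer's actual system.

```python
# Illustrative sketch: predict the expected payment for a claim from
# synthetic historical claims, then flag claims billed far above expectation.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Hypothetical historical claims: [patient_age, length_of_stay, n_procedures].
X_hist = rng.normal(loc=[60, 4, 2], scale=[15, 2, 1], size=(1000, 3))
true_cost = 30 * X_hist[:, 0] + 200 * X_hist[:, 1] + 500 * X_hist[:, 2]
y_hist = true_cost + rng.normal(0, 200, size=1000)      # observed payments

model = LinearRegression().fit(X_hist, y_hist)

# New claims: compare expected payment with the billed amount.
X_new = np.array([[70.0, 3.0, 2.0], [55.0, 5.0, 1.0]])
billed = np.array([3900.0, 9000.0])
expected = model.predict(X_new)

# Simple outlier rule: flag claims more than 3 standard deviations above expectation.
threshold = 3 * np.std(y_hist - model.predict(X_hist))
for exp, bill in zip(expected, billed):
    print(f"expected={exp:8.0f}  billed={bill:8.0f}  suspicious={bill - exp > threshold}")
```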

Predictive analytics can also be effectively deployed in devising care management schedules for patients with chronic diseases such as type 2 diabetes and asthma, who require continuous care and observation. This reduces hospitalisation costs and enhances patient satisfaction.

Ease of Decision Making

This technology helps ensure the right care, the right provider, and continued innovation, and patients are able to improve their living conditions for a better and healthier life. AI techniques make decision making easier for patients, and help the care management sector ensure that a patient is receiving the correct treatment on the schedule laid out in his or her treatment plan. These data mining algorithms gain their expertise through the analysis of many models and current data statistics, even pointing towards virtual surgical assistants. Finally, these technologies can be used in discovering medication for chronic and newly identified diseases. A few examples are EHRs, m-health, and e-health, whose motive is universal access to healthcare.

Track and prevent medical errors

Errors may result in patient harm and cost. A retrospective healthcare analysis at major hospitals in London found that 11% of admitted patients experienced adverse events; 8% of these events led to death, and 48% were judged to have been preventable.

Predictively model patient outcomes

Delivering better prognosis (predicting which patients are at risk of disease). Recently, the feasibility of predicting heart failure cases more than 6 months before their clinical diagnosis was demonstrated using logistic regression and Support Vector Machines (SVM).
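The sketch below shows the general shape of such a model, with synthetic data standing in for real clinical records: a logistic regression and an SVM are trained on a handful of assumed features (age, systolic blood pressure, BMI, HbA1c) and compared by AUC. It illustrates the approach only and does not reproduce the cited study.

```python
# Illustrative sketch: logistic regression vs SVM on synthetic clinical features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic features: age, systolic BP, BMI, HbA1c (all made up for illustration).
X = rng.normal(loc=[65, 130, 27, 6.0], scale=[10, 15, 4, 0.8], size=(n, 4))
# Synthetic label: older, hypertensive, high-HbA1c patients are more likely positive.
logit = 0.06 * (X[:, 0] - 65) + 0.03 * (X[:, 1] - 130) + 0.5 * (X[:, 3] - 6.0)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
svm = SVC(probability=True).fit(X_tr, y_tr)

print("logistic regression AUC:", roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]))
print("SVM AUC:                ", roc_auc_score(y_te, svm.predict_proba(X_te)[:, 1]))
```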

Rendering proactive care (identifying patients who are at risk of developing a particular kind of disease). Prognosis can be based on patient similarity metrics (SimProX): similar patients were clustered with an accuracy of 91% and an F-measure of 54%, and patients at risk of developing a particular disease were identified from their similarity index to other patients.
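A generic nearest-neighbour sketch of this similarity-based prognosis idea is given below; it is not the SimProX method itself, and the patient features and outcome rule are invented for illustration.

```python
# Illustrative sketch: estimate a new patient's risk from the outcomes of the
# most similar historical patients (k-nearest neighbours on scaled features).
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical historical patients: [age, BMI, systolic BP] and a binary outcome.
X_hist = rng.normal(loc=[60, 28, 135], scale=[12, 5, 18], size=(500, 3))
outcome = (X_hist[:, 2] + 2 * X_hist[:, 1] + rng.normal(0, 10, 500) > 195).astype(int)

scaler = StandardScaler().fit(X_hist)
nn = NearestNeighbors(n_neighbors=10).fit(scaler.transform(X_hist))

# Similarity-based risk for a new patient.
new_patient = np.array([[72, 31, 150]])
_, idx = nn.kneighbors(scaler.transform(new_patient))
risk = outcome[idx[0]].mean()
print(f"estimated risk from the 10 most similar patients: {risk:.2f}")
```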

Provide patient personalised modelling

Historically, a guideline-led approach offered convenience for care providers, as it prescribed directions for diagnosis and the appropriate drugs to use. However, a one-size-fits-all approach is not necessarily desirable, because not all patients respond to drugs in the same way, and their required dosages may also differ. Adopting patient-personalised modelling at the point of care can therefore inform the care provider's decisions with evidence and the patient's history.

According to McKinsey & Company, other benefits of healthcare analytics include 1) matching treatments with disease outcomes and 2) allowing patients to effectively manage as well as track their own health.

Conclusion

Patient health data is the key to unlocking as-yet unidentified diseases, and this provides a great opportunity for research scholars and R&D (research and development) departments. It is a rich platform for scholars, since the complexity of disease identification and analysis demands sophisticated theories, algorithms, and supporting software. R&D also builds on the predicted modalities to devise new devices and medications.

A healthcare provider can estimate the probability of a disease occurring and its risk of affecting a larger population, and act accordingly. This improves the efficiency of decision making not only by patients but also by providers. On the whole, patient safety is strengthened by precise decisions made by stakeholders on the basis of data mining outcomes. Beyond this, providers can also promote preventive measures informed by the prediction results, so that the likelihood of people being affected gradually decreases.

To summarise, AI algorithms can inspect the current condition of an individual, predict the type of disease affecting him or her, draft an appropriate treatment plan, support a caretaker in ensuring that the patient is undergoing the right procedure, and ultimately raise the level of patient satisfaction. They can also ensure that payment procedures are accurate and never counterfeited, as the reputation of a healthcare organisation and patient contentment are of the greatest importance. Hence, despite these propitious signs, there are still a few obstacles that AI and deep learning techniques have to overcome before their full benefits can be realised in the healthcare industry.

 
