Computational Biomedicine
Computational biomedicine is the application of computer-based tools and methodologies to simulate and model the human body in health and disease. It is a young, rapidly growing field that spans the entire spectrum of human biology, physiology, and disease, collectively known as medicine, reaching from genomics to the whole human body, epidemiology, and population health. It draws on molecular medicine, simulation and modeling methods, imaging techniques, and information technology.
Why do we need computational biomedicine?
From a single biological cell, built from hundreds of different
chemicals operating together, to the billions of cells that make up our
tissues, organs, and organ systems, to a society of billions of unique
interacting individuals, humans are complex systems holding a wealth of
information to be explored. Such complicated systems do not consist of
identical, interchangeable components: each individual is distinct and
contributes differently to the systems in which they participate. It is
therefore necessary to store, identify, and apply this information accurately
for both clinical and research purposes.
Biological systems span many orders of magnitude, from the
tiniest microscopic sizes to the largest macroscopic scales. These
multi-scale, multi-science systems connect the genome, proteome, metabolome,
and physiome to health. In recent years, computational biomedicine has
developed highly sophisticated tools to model and simulate fundamental
processes in natural systems across these scales.
How have computational methods enhanced biology and medicine?
Alongside its empirical legacy, the increasing ubiquity of
electronic information is driving the further advancement of medicine as a
data-driven, evidence-based science. Medical imaging, electronic patient
records, and the automation of clinical studies have greatly expanded the
amount of patient data that is captured and stored. As a result, medicine is
forming ever-closer ties with engineering, computer science, and statistics.
In biology, researchers have a pressing need for support,
direction, and collaboration to interpret the data generated by
high-throughput genomics projects such as the Human Genome Project, the
Single Nucleotide Polymorphism Initiative, and the Arabidopsis Genome
Initiative. The importance of data management and of systematic, integrated
data-analysis tools in biology and medicine cannot be overstated.
For both practice and research, information technology
provides a practical platform for better integration of the various
biological and medical fields. As a result, many disciplines are converging
around the use of computation and information technology in biology and
medicine. One of the most intriguing applications of molecular simulation in
clinical medicine is predicting which medicines a patient with a given
genotype or genetic variant will respond to most successfully.
The neuromusculoskeletal system is affected by a wide range
of diseases, including some with a high disease burden such as Parkinson's
disease, arthritis, and osteoporosis. Among other applications, computational
biomedicine has been used to predict the force required to fracture bones, to
simulate the force-generating behavior of entire muscles, and to examine
diseased neuromuscular control.
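As a toy illustration of the first of these applications, the sketch below estimates the bending force needed to break an idealized long bone treated as a hollow cylinder in three-point bending. All material properties and dimensions are assumed placeholder values, not outputs of a validated model.

```python
import math

# Toy illustration (assumed values throughout, not a validated model):
# estimate the mid-shaft force needed to fracture an idealized femur,
# treated as a hollow cylinder loaded in three-point bending.
sigma_ult = 130e6                 # ultimate bending strength of cortical bone, Pa (assumed)
r_outer, r_inner = 0.013, 0.007   # outer/inner radii of the bone shaft, m (assumed)
span = 0.40                       # distance between supports, m (assumed)

# Second moment of area of a hollow circular cross-section.
I = math.pi / 4.0 * (r_outer**4 - r_inner**4)

# Three-point bending: peak moment M = F*span/4 and peak stress sigma = M*c/I,
# so the load at which sigma reaches sigma_ult is F = 4*sigma_ult*I/(c*span).
fracture_force = 4.0 * sigma_ult * I / (r_outer * span)
print(f"Estimated fracture force: {fracture_force / 1000:.1f} kN")
```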
Difficulties in computational biomedicine
In contrast to molecular medicine, the modeling methods
involved here are more usually of a continuum character, requiring the
solution of partial differential equations in three-dimensional space and
time. Solving such equations with the proper boundary conditions is
difficult; doing so quickly and accurately is even more difficult. This calls
for powerful supercomputers and scalable codes that can take full advantage
of modern computing hardware, which most present algorithms and software
implementations find a difficult task given the rising complexity and variety
of emerging exascale infrastructures.
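As a minimal sketch of the kind of problem such solvers address, the snippet below integrates a one-dimensional diffusion equation with fixed boundary values using an explicit finite-difference scheme. The coefficient, grid, and time step are arbitrary illustrative choices; real organ-scale solvers work on three-dimensional meshes with far more sophisticated numerics.

```python
import numpy as np

# Minimal sketch, assuming a 1-D toy problem: explicit finite-difference
# integration of the diffusion equation u_t = D * u_xx with fixed (Dirichlet)
# boundary values.
D = 1.0e-3                       # diffusion coefficient (arbitrary units, assumed)
nx, nt = 101, 20000              # grid points and time steps (assumed)
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / D             # obeys the explicit stability limit dt <= dx**2 / (2*D)

u = np.zeros(nx)
u[0], u[-1] = 1.0, 0.0           # boundary conditions: values held fixed at both ends

for _ in range(nt):
    u[1:-1] += D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print(u[::20])                   # coarse profile relaxing toward the linear steady state
```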
The relatively limited computational capability available
within a hospital is a key barrier to realizing the full potential of
simulations for diagnosing and treating diseases in a clinical setting.
Because patient data are protected by law and cannot simply be exported to
more powerful machines elsewhere, one solution is to construct reduced-order
models from pre-simulated data, which physicians can then run cheaply on site.
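One common way to build such a reduced-order model is proper orthogonal decomposition (POD) of stored simulation snapshots. The sketch below illustrates the idea on synthetic data; the snapshot matrix, mode count, and problem sizes are all assumptions made for illustration.

```python
import numpy as np

# Minimal sketch of a reduced-order model built by proper orthogonal
# decomposition (POD) of pre-simulated "snapshots". The snapshots here are
# synthetic placeholders: a few smooth modes plus noise, standing in for
# stored full-simulation results.
rng = np.random.default_rng(0)
n_dof, n_snapshots, n_true_modes = 2000, 50, 3     # sizes assumed for illustration
x = np.linspace(0.0, 1.0, n_dof)
modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(n_true_modes)], axis=1)
snapshots = modes @ rng.standard_normal((n_true_modes, n_snapshots))
snapshots += 0.01 * rng.standard_normal((n_dof, n_snapshots))

# SVD of the snapshot matrix; the leading left singular vectors form the basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :n_true_modes]                        # n_dof x r reduced basis

# Compressing a new state to r coefficients and reconstructing it is the cheap
# operation a clinical workstation could afford.
new_state = snapshots[:, 0]
coeffs = basis.T @ new_state
reconstruction = basis @ coeffs
error = np.linalg.norm(reconstruction - new_state) / np.linalg.norm(new_state)
print(f"relative reconstruction error: {error:.3e}")
```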
Achieving the highest throughput rates requires access to
the latest supercomputers, which are built from vast numbers of nodes, each
containing many cores and accelerators (mostly general-purpose graphics
processing units), allowing very large ensembles of ligand-protein
simulations to run concurrently. Even so, the world's most powerful
supercomputers cannot keep up with the demands of the most difficult medical
problems.
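The ensemble idea itself is simple, as the toy sketch below shows: many independent replicas of a calculation run in parallel and their results are pooled afterwards. The binding-score function and replica count here are invented stand-ins, not a real simulation engine.

```python
from concurrent.futures import ProcessPoolExecutor
import random
import statistics

# Toy sketch of an ensemble workflow: independent replicas of a placeholder
# "binding score" calculation run in parallel and are pooled afterwards.
def score_binding(seed: int) -> float:
    rng = random.Random(seed)
    return -7.5 + rng.gauss(0.0, 0.8)    # fake binding-energy estimate, kcal/mol (assumed)

def main() -> None:
    replicas = range(25)                 # size of the ensemble (assumed)
    with ProcessPoolExecutor() as pool:
        energies = list(pool.map(score_binding, replicas))
    print(f"ensemble mean: {statistics.mean(energies):.2f} "
          f"+/- {statistics.stdev(energies):.2f} kcal/mol")

if __name__ == "__main__":               # guard required for process pools on some platforms
    main()
```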
The challenge is that, although computing technology has
advanced by orders of magnitude faster than practically every other field, it
remains too slow for many large-scale complex systems. Worse, the intrinsic
speed of processing chips has stopped improving as they approach physical
limits and as the power consumption and energy dissipation of these machines
must be kept in check. As a result, it is necessary to keep looking for new
and innovative forms of computing.
The scarcity of publicly accessible datasets is another
problem in biomedicine. The issue is not that researchers are unwilling to
share their data; on the contrary, they are generally ready to do so. The
problem is that such datasets are often expensive to acquire, store, and
maintain, and legal and privacy concerns complicate matters further.
Looking forward
High-performance computing has become an integral part of
many aspects of computational biomedicine and is a driving force behind
developments across the field. As these resources become more widely
available and their performance improves, simulation codes must be adapted to
take advantage of them.
Modeling the vasculature (the network of arteries and veins
that transports blood) is also a significant focus of computational
biomedicine. Machine learning shows promise as a less computationally
demanding surrogate that could be embedded in clinical decision support
systems to identify categories of observed behavior. Distributed human
intelligence, in which the task of classifying medical images that normally
demands deep domain expertise is entrusted to crowds of untrained people, is
another potential strategy for addressing the limited availability of
biomedical data. Surprisingly, studies show that a comparable degree of
accuracy can be attained, reducing the expense and time required of qualified
experts.
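A minimal sketch of such a surrogate, assuming synthetic stand-ins for the pre-computed simulation runs and their behavior categories, might train an off-the-shelf classifier on parameter-label pairs and then label new cases without re-running the full model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Minimal sketch of a machine-learning surrogate, with synthetic stand-ins
# for expensive pre-computed simulations: a classifier learns the mapping
# from simulation inputs to a category of observed behavior, so new cases
# can be labeled without re-running the full model.
rng = np.random.default_rng(1)
params = rng.uniform(0.0, 1.0, size=(500, 3))        # simulation input parameters (assumed)
labels = params[:, 0] + 0.5 * params[:, 1] > 0.8     # placeholder "behavior category"

surrogate = RandomForestClassifier(n_estimators=100, random_state=0)
surrogate.fit(params, labels)

new_case = np.array([[0.7, 0.4, 0.1]])               # a hypothetical new parameter set
print("predicted category:", surrogate.predict(new_case)[0])
```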
Finally, it is important to understand that computational
biomedicine is not limited to simulating specific events within the human
body; it can also support doctors in making diagnostic judgments in the face
of ever-increasing amounts of data.