The Curse of Dimensionality in Data-Intensive Modeling in Medicine, Biology, and Diagnostics

With Prof. Tim Conrad (TC), Free University of Berlin, and Dr. Donal Mackernan (DM), University College Dublin.

Abstract

Until recently the idea that methods rooted in statistical physics could be used to elucidate phenomena and underlying mechanisms in biology and medicine was widely considered to be a distant dream. Elements of that dream are beginning to be realized, aided very considerably by machine learning and advances in measurement, exemplified by the development of large-scale biomedical data analysis for next-generation diagnostics. In this E-CAM interview of Tim Conrad, the growing importance of diagnostics in medicine and biology is discussed. One difficulty faced by such developments, and shared with particle-based simulation, is the "curse of dimensionality". It is manifest in problems such as: (a) the use of a very large number of order parameters when trying to identify reaction mechanisms, nucleation pathways, metastable states, reaction rates, or polymorphs; (b) machine learning applied to electronic structure problems, where, for example, neural-network-based potentials need very high dimensional basis sets; (c) systematic coarse-graining, which would ideally start from a very high dimensional space and systematically reduce the dimension. The opportunities and challenges for scientists engaging with industry are also discussed. Tim Conrad is Professor of "Medical Bioinformatics" at the Institute of Mathematics of the Free University of Berlin and head of MedLab, one of the four laboratories of the MODAL research campus. MODAL is a public-private partnership project which conducts mathematical research on data-intensive modeling, simulation, and optimization of complex processes in the fields of energy, health, mobility, and communication. Tim Conrad is also the founder of three successful start-up companies.

In this E-CAM interview with Prof. Tim Conrad, the growing importance of diagnostics in medicine and biology is discussed, including concepts rooted in signal analysis relevant to systematic dimensional reduction and pattern recognition, and the possibilities of applying them to systematic coarse-graining. The opportunities and challenges facing scientists who found start-up companies are also discussed, drawing on his own experience.

 

 

Atoms and ambulances

DM: Can you tell us about yourself – where you call home, where you were reared?

TC: I was born in Hamburg and more or less directly afterward we moved to the countryside by the river Elbe, where I grew up in a large village. I didn't really want to stay in that area, as I wanted to be connected to a wider community where one could talk and learn from others. Back then in Germany you still had military or civil service. I chose to do civil service as a paramedic assistant working in an ambulance. I think that shaped me a lot, as it involved not only moving people to the hospital, but also trying to stabilize patients on the spot, and often wishing we had tools to better diagnose their difficulties. You could measure blood pressure and ECG, but otherwise there were not many tools that you could actually use. That is still driving my current work to develop better tools, not only in terms of software, but also for emergency or clinical settings. From an early stage in school I wanted to understand how stuff works, not only artificial things like cars but really at a fundamental level: what are these atoms doing, how do they form molecules, how do they move, what does it mean when molecules come together or react. That led me toward a medical direction on the one hand, and molecular dynamics and statistical mechanics on the other.

DM: Was that driven by your own curiosity, or was it in part catalyzed by someone, an influential teacher, or a friend or a brother or sister?

TC: All of the above. It was not that there was a super cool teacher that said, kids we are going to do this, and it caught our attention. And it was not that I was a geek, always trying to understand how things work. But I was always interested in the natural sciences, and our school was particularly strong in the sciences, and I had some talent there, and we had good teachers. I remember in particular a teacher who introduced us to neurosciences and its underlying mechanisms, like neuron signal transduction. All of this was random, there was no plan to become a scientist, nor was it foreseeable, but together they all had an influence later on me.

 

Diagnostics and omics

DM: One term that I have seen in several of your papers is omics. Can you tell us what it is, and how is it related to your work?

TC: Omics just plainly means "a lot of" in Greek. In a biological context, it means "a lot of something" at a particular level or layer. Proteomics is about how all of the proteins available in the human body are associated with or drive a process. Genomics basically means all of the genes that one can analyze related to a process. Their usefulness comes from being able to quantify, and in the best-case scenario understand, how different levels affect each other. As a first step, you can correlate phenomena with one of these omic levels. For example, if a disease is correlated to a particular gene in the genome, you may be able to study this gene in lots of people and work out that a mutation is responsible for people carrying this disease. You can do something similar on a proteomic level. For example, you may be able to take a drop of blood from a person, and from that drop, count or analyze all the proteins that are there, including their concentrations. For instance, if I have just eaten, there should be proteins related to digestion, or if there is an infection with a particular virus or bacterium, one should find proteins that are responsible for fighting that disease. By analyzing the proteome in the blood, I can draw conclusions about the status of a patient, and in the best-case scenario, understand something about how mechanisms work. For example, suppose someone has an infection [1]. You will see immune-specific proteins in the blood going up in concentration, and then see a drop in their concentration, and two days later see a wave of proteins connected to the immune response going up. You may even see patterns there and can ask medical people about their significance and whether there are mechanisms to be learned from all of this.

 

Classification and the curse of dimensionality

DM: So you make your measurements and get data. But you still need to characterize it. Can you tell us how this is done, for instance in terms of classifiers?

TC: A classifier tells, for instance in the case of proteins, which proteins are different between two groups. So if in one group protein A is up-regulated (that is, at high concentration) and protein B is also at a high concentration, while in another group both proteins are at a low concentration, then I have found one rule with two components that tells these two groups apart. The classifier is not much more than finding the difference between these two groups, or even multiple groups, given some dataset. The dataset should include labels corresponding to each group for training purposes. One example of such a label might be whether a patient has a given disease or not [1,2].
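
As a concrete illustration of this kind of rule learning, here is a minimal sketch (not the pipeline discussed in the interview; the synthetic data, group sizes, and choice of logistic regression are all assumptions for illustration) that trains a two-group classifier on labelled "protein concentration" data and inspects which proteins carry the rule.

```python
# Minimal two-group classifier sketch (illustrative only, not the pipeline
# discussed in the interview): synthetic "protein concentrations" with labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_per_group = 100

# Group 0: proteins A and B at low concentration; group 1: A and B up-regulated.
# Protein C is uninformative noise in both groups.
group0 = rng.normal(size=(n_per_group, 3))
group1 = rng.normal(size=(n_per_group, 3))
group1[:, :2] += 3.0

X = np.vstack([group0, group1])                       # concentrations
y = np.array([0] * n_per_group + [1] * n_per_group)   # labels, e.g. disease yes/no

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("learned weights for proteins A, B, C:", clf.coef_.round(2))
print("training accuracy:", clf.score(X, y))
```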

DM: In this context, can you tell me what is the “curse of dimensionality”?

TC: This is best explained in terms of proteomics. You might have measurements of hundreds or thousands of proteins where the number of individuals sampled is far smaller, with the consequence that many combinations of protein concentrations could equally well be assigned to either group or label. So I would need far more samples (i.e. patients) than the available dimensions or label parameters to be able to robustly classify or tell apart two groups. So if I have only a few parameters to classify, I need only a few hundred patients to learn from; if I have dozens of parameters, I need thousands, and possibly millions, of patients; and if I have hundreds of parameters, I probably need more patients than there are on the planet to be able to find meaningful, mathematically robust differences in the data. Roughly speaking, the more complex a system is, the greater the amount of training data needed to build a reliable classifier.
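
The effect is easy to reproduce on synthetic data. In the toy sketch below (an illustration only; the sample size, feature counts, and classifier are arbitrary assumptions), a linear classifier fitted to purely random labels reaches essentially perfect training accuracy once the number of "protein" features far exceeds the number of patients, which is exactly why many more samples, or a dimension reduction step, are needed for a robust conclusion.

```python
# Toy illustration of the curse of dimensionality (assumed setup, not real data):
# with far more features than patients, a linear classifier "separates" even
# completely random labels on the training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_patients = 40

for n_proteins in (5, 50, 500, 5000):
    X = rng.normal(size=(n_patients, n_proteins))   # meaningless "protein levels"
    y = rng.integers(0, 2, size=n_patients)         # random disease labels
    clf = LogisticRegression(max_iter=5000).fit(X, y)
    print(f"{n_proteins:5d} features -> training accuracy {clf.score(X, y):.2f}")
```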

 

Compressive sensing and dimensional reduction

DM: Compressive sensing seems to be one of the great discoveries of the last decade or so in signal processing. Can you tell us what is the basic idea and how it is related to the difficulties you just described?

TC: Suppose you take a picture or an image of a house in a forest. Compressive sensing says that the amount of information in this picture is very small: there are just two things, essentially a house and a forest, but if I express it in pixel space I have millions of dimensions. The idea of compressive sensing is that there are systems which are described by a very large number of parameters but which, with the right mathematical tools, can be represented with just a few parameters. That is the rough idea. If you go further in this direction and look at a proteomics data set of a human being, which involves a lot of proteins and is described in terms of a very large number of dimensions, maybe only three or four or five of them are important, and if you know these three or four or five, the rest can in most cases be reconstructed through correlation. A related situation is the removal of concentration parameters that are not very informative. For example, there may be proteins associated with breathing or with cell replication that don't give important information regarding a particular disease. That is really what compressive sensing is about. The most important question is when is this possible? If you have data that is more or less randomly distributed, you will really need everything to be able to reconstruct all of the information, and then compressive sensing will not work. But if you have a lot of correlations, you can reconstruct the rest from just a small amount of information, and then you can use compressive sensing [2,3,4].

Consider a linear under-determined equation Y = AX, where X is an N-dimensional vector known to be sparse (having K non-zero elements which are otherwise unknown), A is an M x N dimensional real random matrix whose rows are of unit length, Y is the M-dimensional vector of measurements, and N >> M. If M ≥ C K log(N/K) with C ~ 0.28, compressive sensing theory shows that with probability one the non-zero elements of X can be uniquely determined through the measurement of Y, and that A will satisfy the restricted isometry property [5]. However, when X is strictly sparse, the natural recovery problem (minimizing the number of non-zero elements of X consistent with Y) is not convex; but if this sparsity condition is relaxed to minimizing the L1 norm of X, the problem becomes convex and can be solved easily through linear programming methods. If X is not sparse, but a generalized rotation R is known such that X = R Z with Z sparse, compressive sensing is applicable with respect to Z and the matrix AR.
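
As a minimal numerical sketch of the recovery step just described (an illustration only, not the method of [2]; the problem sizes and the random Gaussian choice of A are assumptions), the L1 minimization can be recast as a linear program by splitting X into positive and negative parts and handed to an off-the-shelf solver:

```python
# Basis-pursuit sketch (illustration only): recover a K-sparse vector x from
# M << N random measurements y = A x by minimizing the L1 norm of x subject to
# A x = y, written as a linear program with x = u - v, u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, M, K = 200, 60, 5                        # ambient dimension, measurements, sparsity

x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.normal(size=K)

A = rng.normal(size=(M, N)) / np.sqrt(M)    # random Gaussian sensing matrix
y = A @ x_true                              # the M measurements actually observed

c = np.ones(2 * N)                          # minimize sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                   # A (u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N), method="highs")
x_rec = res.x[:N] - res.x[N:]

print("recovery error:", np.linalg.norm(x_rec - x_true))
```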

 

DM: Can you tell us the genesis of your paper “Sparse Proteomics Analysis – a compressed sensing-based approach for feature selection and classification of high-dimensional proteomics mass spectrometry data” [2], and how long did that take?

TC: That was actually quite interesting. We teamed up with one of the experts in the compressive sensing community, Gitta Kutyniok from the Technical University of Berlin, and discussed with her the possibility of applying the mathematical findings that are known to work very well on image data to biological data. She was very interested in seeing how far these techniques could be pushed in that direction, because it had not been tried, or at least not for the systems that we were interested in. So we tossed ideas about, and after about six months we knew that the basic ideas would work. In the end, we really went for more or less an engineering approach, tried out various methods, and found out where the limits were. Just a couple of nice ideas were needed to make the techniques that were already there applicable to proteomics data. Once we realized that we could process proteomics data so that it becomes sparse, and therefore that compressive sensing theory is applicable to it, and had worked out how to do that, the rest was comparatively straightforward to solve. The main outcome of that paper is that although one can acquire an enormous number of signals from blood in terms of its protein content, not that many are needed to represent the status of a human being. I believe it took an additional year and a half to wrap it up.

DM: The curse of dimensionality, which we referred to before, is often a problem when you try to describe the properties of very complex, high-dimensional systems. There are questions like: what are the relevant order parameters to describe a process, what is the reaction path, or what is the dominant equilibrium state or the dominant metastable states? A natural question would be: can compressive sensing be used to identify the most relevant order parameters, or the metastable states?

TC: In principle compressive sensing is applicable, but I don't know whether someone has tried it. I would expect someone has, because the compressive sensing idea was picked up in many communities, and many people from the compressive sensing community basically tried to apply it everywhere. But again, one of the main requirements for compressive sensing is that your signal can be represented in a sparse representation.

DM: Is there a systematic way to find a representation in which the data will be sparse?

TC: There are a lot of ways to do that. One example that goes a bit beyond compressive sensing but is based on a similar idea is called dictionary learning. The rough idea is that you have a signal, say an image of a forest and a house, and you now learn patches of this image. So in the forest everything is green, and in the house there might be a red roof and a blue door. Basically, you learn to recognize the patches of which your image is constituted, for example four or five patches that describe your image: there's the door, that's a roof, there's the forest, and so forth. Now you've learned a dictionary in which your image is sparse, because before your image consisted of one million pixels, and now you can describe it with just five patches. Learning this dictionary obviously depends totally on the data you're looking at, but once you have this dictionary of basic elements of which your image is constituted, you can reuse them. If you can learn these elements of a dictionary, you can represent a lot of signals in a sparse way. But if your signal is just random, then you will probably not find anything reusable, and not find a dictionary. So as soon as you find that parts of your signal are used more often, you can basically find where they are used and replace them with an index or something appropriate.
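
A small sketch of this idea follows (illustrative only; the synthetic data, the number of atoms, and the use of scikit-learn's MiniBatchDictionaryLearning are assumptions, not the tools used in the work discussed here). It learns a dictionary in which signals built from a few hidden prototypes become sparse:

```python
# Dictionary-learning sketch (illustrative assumptions): after learning, each
# signal is represented by only a handful of non-zero coefficients.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
n_features, n_prototypes, n_samples = 64, 5, 500

# Synthetic data: every signal is a sparse combination of 5 hidden prototypes.
prototypes = rng.normal(size=(n_prototypes, n_features))
weights = rng.normal(size=(n_samples, n_prototypes))
weights *= rng.random((n_samples, n_prototypes)) < 0.3      # make mixtures sparse
X = weights @ prototypes + 0.01 * rng.normal(size=(n_samples, n_features))

dico = MiniBatchDictionaryLearning(n_components=10, alpha=1.0, random_state=0)
codes = dico.fit_transform(X)                                # sparse codes per sample

print("average non-zero coefficients per sample:",
      np.mean(np.count_nonzero(codes, axis=1)))
```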

DM: Can you tell us about the work EMT network-based feature selection improves prognosis prediction in lung adenocarcinoma [6]?

TC: EMT (Epithelial-Mesenchymal Transition) is basically a process in which a cell can de-localize from its tissue. This happens often in tumor cells where they can de-localize and swim to other places in the body, and start a new tumor. EMT is one of the processes involved in metastasis. So there’s the question of how often can you see this process happening, and gain an understanding about the actual cancer progression in a patient. The question we’re asking is if we don’t look at all the possible ways that a tumor or cancer can develop, but instead focus on EMT, does it tell us already enough to make predictions about the progression of a disease using classifiers?

DM: So it just comes back to one of the earlier questions about what data you sample, but also within the data what you choose to focus on?

TC: Exactly. So we sample more or less again everything that is available, all the omics. We can look at the genomics and proteomics data, but filtered by this EMT process [7]. We know which proteins and genes are connected to EMT. For this project, we literally threw out all features that were not connected to EMT, so we just had a subset of proteins and genes, and reduced the dimensionality from say 50,000 to 250. We then asked the question whether this was enough to classify the status of a tumor or cancer.
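
In code, the filtering step described here amounts to little more than keeping the columns that appear in a curated EMT gene/protein list before training any model. The sketch below is hypothetical: the file names omics_table.csv and emt_genes.txt, and the use of pandas, are placeholders for illustration, not the study's actual data handling.

```python
# Hypothetical sketch of the EMT feature-filtering step: keep only features
# that appear in a curated EMT gene/protein list.
import pandas as pd

omics = pd.read_csv("omics_table.csv", index_col=0)          # patients x ~50,000 features
emt_genes = set(pd.read_csv("emt_genes.txt", header=None)[0])

emt_columns = [c for c in omics.columns if c in emt_genes]
omics_emt = omics[emt_columns]                               # patients x ~250 features
print(omics.shape, "->", omics_emt.shape)
```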

DM: How well did it work in terms of predictive accuracy, and second is the classifier intelligible in the sense that you can actually understand what is sparse and why?

TC: Yes, you can show empirically that this method is quite accurate, comparable to what big analytic platforms would also predict. We looked at the survival time of patients. In a nutshell, we could say how bad or severe their current status was. Is it so severe that the patient might die soon, say within a year or two, or is it stoppable, that is, not so severe, so that you can treat the patient successfully with a survival time of at least five years? We were able to classify disease severity in terms of short, medium, and long-term survival.

DM: How useful would that be in a hospital or in a doctor’s clinic, can it allow them to make clinical decisions more quickly?

TC: The hope is that this can help a medical doctor, for example, when making decisions for therapy planning. If a disease is predicted to be in a very severe state a more aggressive treatment or therapy would be appropriate, and if on the other hand it is predicted to be mild, a less aggressive treatment might be tried first. What we provide can only be one piece in a puzzle of the whole decision-making process that the medical practitioner must consider, but it might help favor one option out of a set of options.

 

Coarse-Graining and dimensional reduction

DM: Last year I had a discussion with Christoph Dellago on neural network based methods that he has been developing with students for so-called polymorph recognition, so as to recognize different forms of crystals, but also to simulate ground state quantum mechanics using so-called neural network potentials [8], which can also be used for excited states. Christoph told me that they also suffer from the curse of dimensionality, because of the very high dimensionality of the basis sets they have to use, which makes the training of the neural network very costly as you need huge samples to get decent statistics. Christoph speculated that someone in the next few years might find a way to coarse grain the basis so as to make the learning process simpler. So I was wondering if compressive sensing might be useful in that context? I guess this would imply that you have to somehow find a representation where the data is sparse. What do you think?

TC: Yes, I think that’s absolutely true. One direction that we are looking into at the moment is exactly that. If you have a complex system, is there a way to write it down using the techniques of compressive sensing or dictionary learning, to represent it in a lower dimension, and hopefully need less training data to classify, or to train your networks, or whatever you are interested in doing? How to do it for a given system or data set, can it be made more systematic, and is it even true? It could be that you use compressive sensing methods on a system and it no longer shows any structure and cannot be learned, because the transformation used to reduce the dimensionality of the system effectively randomized it. I think there’s a lot of potential for great research to be done, and coarse-graining is definitely one of the approaches that might benefit from compressive sensing and dictionary learning.

 

Understanding cell trajectories with sparse similarity learning

DM: I noticed that one of your research projects is entitled “Understanding cell trajectories with sparse similarity learning”. How is the cell represented here?

TC: This is related to how the cell can be measured. With genomics, I can measure the activity of every single gene in a cell. If, for example, we see that in a particular cell a lot of genes are active which are associated with DNA repair, we can expect that the cell is really struggling to survive and is trying to repair itself. This is a particular state. There might be another cell where most of the active genes are associated with reproduction. So I can classify a cell into particular states, for example it is repairing itself, or it is reproducing, or it is migrating, or whatever. And then I can try to relate these different states to one another. For example, every time a cell reproduced itself through cell division, it was followed by a state where it repaired its DNA, and then perhaps it was doing strange things with its energy consumption. More generally, I can try to understand cell dynamics by looking at the states through which it develops over time and the different paths that are followed. The key questions are: which genes are active, can I relate states to one another, and do I understand how to reverse engineer a process? For example, if I know that cells are following a particular path that usually leads to cancer, then maybe I can do something with suitable drugs or therapy.

DM: I guess one thing is that if you’re trying to measure a cell you need to actually sample the cell, and that means you’re going to do some violence to the cell?

TC: That’s true, especially in this single-cell business. The biggest complication is actually to get single cells because usually, one gets parts of tissue or a group of cells. To separate them is already complicated enough, so of course we can’t do that, but collaboration partners can, and once you have that, you basically make a cell burst and count all the messenger RNA (mRNA), which carries codes from the DNA in the cell nucleus to sites of protein synthesis in the cytoplasm. Basically, whenever a gene becomes active, its mRNA is transcribed and moves to sites in the cytoplasm where the corresponding proteins are produced. Counting the mRNA profile allows us to get a fingerprint of what is active in the cell at the time that it is killed.

 

Network of network-based omics data integration

DM: Can you tell me what is meant by network of network-based omics data integration and why is it important for instance in the context of cancer prediction and treatment?

TC: Suppose I take a blood sample from a person and look only at the proteins. The proteins have been produced because a gene was read somewhere and the corresponding mRNA was generated, so there is obviously a strong correlation between genes and proteins. Now, proteins often affect mechanisms inside the cell or the body that can change concentrations of metabolites or sugars, or even other messenger systems, fats, and so forth. Until ten or fifteen years ago, and to some extent even now, people tended to look at isolated levels, for example isolated genes and interacting proteins. For each of these levels you can draw a graph for the network, for instance which protein influences or works together with another protein. What one now should do is see how these networks are connected. Obviously, there’s a lot of stuff that can be interconnected, but it’s not just one big network connected with everything else. There is a hierarchy, but not necessarily a hierarchy as we know it: there is a top and a bottom, but the individual networks still more or less work independently, while sometimes interacting with each other. This is the network of networks idea. The term was created because the analysis methods developed for isolated networks don’t really work on connected networks. In the context of medical diagnostics of cancer, for example, if you look at only one level, such as genes and their mutations, or how proteins are affected, you get a very limited picture of what is happening. If you know how to integrate these different layers, you get a much better picture and a tool for prediction or classification, which can even provide decision support to medical doctors. You can tell them: I think there is a mutation on this gene that makes this cancer or tumor evolve, but on a protein level I see a different picture. And if you put these two findings together, you may end up with a different therapy and a better option compared with what you would have done if you had just looked at the genes.
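
As a toy illustration of the data structure involved (a sketch only; the node names and the use of networkx are assumptions for illustration, not the analysis methods discussed above), a network of networks can be represented as separate per-layer graphs joined by a handful of cross-layer edges, for instance linking genes to the proteins they encode:

```python
# Toy "network of networks" sketch with networkx (node names are illustrative):
# one set of edges per omics layer, plus cross-layer edges linking genes to the
# proteins they encode.
import networkx as nx

G = nx.Graph()
G.add_edges_from([(("gene", "A"), ("gene", "B")),            # gene-gene interactions
                  (("gene", "B"), ("gene", "C"))])
G.add_edges_from([(("protein", "A"), ("protein", "B")),      # protein-protein interactions
                  (("protein", "B"), ("protein", "D"))])
G.add_edges_from([(("gene", "A"), ("protein", "A")),         # cross-layer: gene encodes protein
                  (("gene", "B"), ("protein", "B"))])

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```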

 

Modularity and reusable building blocks

DM: Can you tell us what does modularity mean in biology, what are the underlying assumptions, and what are so-called reusable building blocks?

TC: Many biologists believe that a lot of proteins are in most cases working together. Take for example the DNA repair that might be required if you were sitting in the sun for too long. The radiation of the sunlight will have damaged some of the DNA in some skin cells. In most cases this is not very harmful, because the cell can repair the damage using a couple of dozen proteins that basically work together. Now you can list these twelve or twenty proteins in a block, or draw a graph and put an edge between proteins when they do something together, and end up with a network saying which proteins need each other to function. From a topological perspective, you can look at the network as a graph, and there are regions where the graph is dense, in the sense that you find groups of nodes that are tightly connected together, with many edges connecting them to each other. This is what we call a module. When you find groups of proteins that work together to do a particular task, it is believed that these groups have been more or less together for a very long time during evolution. Now you can ask whether there are fundamental building blocks that, for example, also exist across very distantly related species. A module is basically the description of a topological feature in a graph [9]. Building blocks is a term that biologists use to say that, in this species, these proteins for example always work together as a unit, and if you try to take them apart they will not work anymore.
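
A minimal sketch of this topological notion of a module follows (illustrative only, not the method of [9]; the tiny protein graph and networkx's modularity-based community detection are assumptions). It finds tightly connected groups of nodes in a small interaction graph:

```python
# Module-detection sketch (illustrative only): find tightly connected groups of
# proteins in a small interaction graph with modularity-based communities.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

edges = [("p1", "p2"), ("p2", "p3"), ("p1", "p3"),   # module 1: a dense triangle
         ("p4", "p5"), ("p5", "p6"), ("p4", "p6"),   # module 2: another triangle
         ("p3", "p4")]                               # single weak link between them
G = nx.Graph(edges)

modules = greedy_modularity_communities(G)
print([sorted(m) for m in modules])
```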

DM: So reusable in this context means that you know that certain proteins are in some sense related to each other across one species, or from one situation to another?

TC: If I have a really long list of proteins but know that many of them always work in groups, then I can of course make my network representations much sparser, because I need much less information once I know that many of these things are grouped together. For example, if I know that fifteen proteins are always found together in a group, it is enough to just look at a single protein to know what the rest are doing. And that brings us back to compressive sensing and dictionary learning.

DM: Excellent, that’s very interesting. Can you tell us a bit about the paper “Clinical characteristics and disease severity associated with adenovirus infections in infants and children -of a novel adenovirus, HadV-D80” [7]?

TC: Imagine a medical doctor is looking at an ill child and wishes to know whether the child has a particular viral infection, and which parameter(s) should be used to guide that decision. In a way, doctors have been doing this for decades and know from experience that they should just look at three or four or five medical signs to decide: yes, it's a virus, or no, it's just a bad headache. The question in that paper is simply: if I could sample say five hundred different things, which of those parameters or dimensions are actually necessary to determine whether a patient has a particular viral infection? Moreover, often you cannot take so many samples. In that and similar papers, we were looking at the available data and trying to extract those features that are important for predictions. In this paper, we did that for a particular virus because it is often hard to diagnose and can also be fatal for some infants, and we were also looking at the changes in the features or signals that occur in different age groups. It turned out that there are changes, and this is a very important finding for medical doctors: if they look at an infant of up to two years of age, or at a school kid of seven, they may need to look at different features, even if they are interested in answering the same question, namely whether this particular virus is present or not.

 

Opportunities and challenges of start-ups

DM: We now are going to have a change of gear. You’re a founder of a start-up company. Can you tell us a bit about the opportunities and challenges for a scientist to work within this context and of your start-up?

TC: I have actually founded three companies by now. The very first one was when I had just finished school. It was also my first time getting in touch with industry, working in a non-academic job, and seeing whether there was something there for me. I then decided that it was not what I really wanted and went into academia. During my academic education, between the bachelor's and the master's, there was an opportunity from our university for students to found a start-up, with the university funding all the infrastructure. Basically, there were two students, me and a colleague, who were the CEOs of the company. What the university wanted to know was whether students, or people who are close to finishing their studies, are already capable of running or even leading a company, or whether it is actually something that one should not ask of students because they're simply not ready for it. That was really the start of this, and the company grew and we had a lot of things to do. The main idea was that we would talk to industry and see if they had an interesting problem, and then negotiate a price. The project was done by people who were still studying and were interested in working on these problems from an industry point of view. After some time the university had learned what it wanted to know, so this colleague of mine and I decided that we wanted to continue on our own, and we bought the company from the university. Then he left for another company to do something in a different direction, and now I am basically left with this company. The main idea is still the same: we try to identify problems that industry partners or companies have, and then we look at them from an academic point of view but also with a student perspective. We identify students who, under the guidance of professors or myself, work on a problem. There are a lot of benefits for a company. Usually it is not as expensive as a large consulting company or a very well established consultant. The students learn a lot; for example, in academia you have weeks and weeks to study a problem and then maybe have a solution that might or might not work. They learn very early that industry works at a different speed, and for most of them it's an eye-opener. Afterward, some will say, look, I will never do that again because I can't do that sort of thing. And others will say, look, this is exactly what I want to do, and wish to finish their studies as soon as possible to go into industry. So that was the second company. And the third company, which we just started, is also in an academic environment. Basically, it is a platform for industry projects that start with ideas born in academia, which we then try to bring to companies, and it also tries to give companies a platform to come into contact with academic people.

DM: If you’re producing innovations in academia, then in principle the university or possibly the inventors own the IP, but that can complicate its commercialization with industry, so how do you handle IP?

TC: There are usually two different situations or types of products. For the first type, industry comes to us with a particular problem and says, look, we will pay you, so everything that is invented here belongs to the company, and that is usually fine, and if you don't like it then you just don't do it. The second type has usually been developed within academia, and as it has been done where you work and in the time that you're paid by the university, it generally belongs to the university. Then either the university wants to commercialize it, or the university tells you that it doesn't and you can do whatever you want with it. As there are basically these two situations, it is actually quite simple to deal with IP, because as early as possible we negotiate with the company about what should happen if something interesting comes out of the project. Ten years ago, when we first started with this, we didn't do that, and it led to a lot of trouble and unnecessary discussions. Fortunately they never actually led to legal trouble, but we learned a lot from the experience. The lesson is that you really need to discuss all the IP issues, sometimes even before you start, so that there is no bad blood at a later stage. Everybody needs to be on the same page.

 

Exascale computing and detailed simulation of biological mechanisms

DM: What are the opportunities and challenges that you expect to emerge from massively parallel computation from a medical diagnostic perspective?

TC: I think the main opportunity is that you can simulate very complex systems, and even do so without having to coarse grain, that is to simulate the system at an atomistic or molecular level to explain biological phenomena or phenotypes. For example, if there’s a mutation in a particular gene that leads to a protein which has a slightly different 3D structure say, and therefore cannot bind anymore to a particular target or the binding is reduced in strength, simulation may explain why a person has a particular disease or why a particular drug doesn’t work as effectively. I think one of the largest opportunities will be understanding disease mechanisms or general biological mechanisms in living organisms through the use of detailed simulation. That would be one of my dreams, to be able to simulate things as close to biology or medicine as possible and being able to do that on timescales that are interesting. I mean running a simulation for a picosecond is nice but we need much longer times, maybe seconds or minutes, timescales that are interesting enough from a medical perspective, and yet still can be handled at a very detailed level by a simulation engine.

DM: I am really delighted with this conversation. I learned a lot both preparing for and actually listening to you. I want to thank you very very much.

 

Bibliography

[1] Borong Shao, Carlo Vittorio Cannistraci, Tim Conrad Epithelial Mesenchymal Transition Network-Based Feature Engineering in Lung Adenocarcinoma Prognosis Prediction Using Multiple Omic Data. Genomics and Computational Biology, 3 (2017) e57

[2] Tim Conrad, Martin Genzel, Nada Cvetkovic, Niklas Wulkow, Alexander Leichtle, Jan Vybiral, Gitta Kutyniok, Christof Schütte Sparse Proteomics Analysis – a compressed sensing-based approach for feature selection and classification of high-dimensional proteomics mass spectrometry data. BMC Bioinformatics, 18 (2017) 160

[3] Richard G. Baraniuk Compressive Sensing. IEEE signal processing magazine, 24 (2007) 118

[4] Richard G. Baraniuk Compressive Sensing. Microsoft Research Lecture on YouTube

[5] Hong Cheng The Fundamentals of Compressed Sensing. In: Sparse Representation, Modeling and Learning in Visual Recognition. Advances in Computer Vision and Pattern Recognition. Springer, London (2015)

[6] Borong Shao, Maria Bjaanæs, Åslaug Helland, Christof Schütte, Tim Conrad EMT network-based feature selection improves prognosis prediction in lung adenocarcinoma. Submitted

[7] Patrick E. Obermeier, Albert Heim, Barbara Biere, Elias Hage, Maren Alchikh Conrad, Brunhilde Schweiger, Barbara A. Rath Clinical characteristics and disease severity associated with adenovirus infections in infants and children -of a novel adenovirus, HadV-D80. Submitted

[8] Christoph Dellago, Donal MacKernan A Conversation on Neural Networks, from Polymorph Recognition to Acceleration of Quantum Simulations. E-CAM Newsletter, October 2017

[9] Victor Mireles, Tim Conrad Reusable Building Blocks in Biological Systems. Submitted

 
