
Speakers

Keynote: Anita Grigoriadis, Winston Hide, Inna Kuperstein, Alfonso Valencia
Guest: Raffaele Giancarlo, Alexander Kel, Luana Licata, Emanuela Merelli, Francis Ouellette, Matthias Reumann

Anita Grigoriadis,
Department of Research Oncology, King’s College, London, UK.

Anita Grigoriadis is a lecturer in Cancer Bioinformatics at King’s College London (KCL), a position she has held since 2013. She received her Master’s degree at the Institute of Molecular Pathology, University of Vienna (Austria) before moving to London (UK) for her PhD at the Ludwig Institute for Cancer Research (LICR). At a time when omics data was slowly beginning to establish itself in biomedical research, Anita started to work as a postdoctoral bioinformatician on breast cancer genomics at the LICR and at the Breakthrough Breast Cancer Centre (London) under Professor Alan Ashworth. In 2008, she joined the Breakthrough Breast Cancer Research Unit at King’s College London under the leadership of Professor Andrew Tutt, where her bioinformatics interest in researching genomic instability and immune-related features in triple-negative breast cancer began.

Interoperability of clinical, pathological and omics data to execute personalised medicine
Translational research has seen an increasing trend towards omics techniques and imaging approaches, in combination with clinical and pathological data. Multifactorial data, both large in sample size and heterogeneous in context, needs to be integrated in a standardised, cost-effective and secure manner so that it can be utilised and searched by researchers and clinicians. Small- to moderate-sized research groups need to find solutions for handling and administering enormous data volumes whilst pursuing new discoveries.
Here, I present solutions to support data management and the integration of digital microscopy and pathology, and illustrate the utility of R-shiny for making high-throughput data searchable.
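The abstract refers to R-shiny; as a rough illustration of the underlying idea, an interactive filter over a sample-annotation table, here is a minimal Python sketch. All column names and values are invented for illustration and are not from the speaker's work.

```python
# Minimal sketch of the kind of query a searchable data dashboard
# (e.g. an R-shiny app) exposes over a sample-annotation table.
# Column names and values are hypothetical.
samples = [
    {"sample_id": "S1", "subtype": "TNBC", "er_status": "neg", "mutations": 142},
    {"sample_id": "S2", "subtype": "LumA", "er_status": "pos", "mutations": 37},
    {"sample_id": "S3", "subtype": "TNBC", "er_status": "neg", "mutations": 98},
]

def search(table, **criteria):
    """Return the rows matching every key=value criterion."""
    return [row for row in table
            if all(row.get(k) == v for k, v in criteria.items())]

print([r["sample_id"] for r in search(samples, subtype="TNBC")])  # ['S1', 'S3']
```

In a dashboard the criteria would come from UI widgets rather than keyword arguments, but the filtering logic is the same.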



Winston Hide,
Sheffield Institute for Translational Neuroscience, University of Sheffield, UK, and Department of Biostatistics, Harvard TH Chan School of Public Health, Boston, USA.

Winston Hide is Professor of Computational Biology at University of Sheffield, UK.
Professor Hide graduated in Zoology from the University of Wales (Cardiff) in 1981. He attended Temple University, Philadelphia, and graduated with a PhD in molecular genetics in 1991.
In 1992 he undertook postdoctoral training with Wen-Hsiung Li at the University of Texas, Houston, where he published his first Nature paper. In 1993 he went on to train with David Pawson at the Smithsonian Institution National Museum of Natural History, in 1994 with Richard Gibbs at the Baylor Human Genome Centre, Houston, and with Dan Davison at the University of Houston. In 1995 he gained industrial experience in Silicon Valley at MasPar Computer Corporation as Director of Genomics.
In 1996 Professor Hide founded the South African National Bioinformatics Institute at the University of the Western Cape, South Africa, and was appointed Professor in 1999.
In 2007, he became visiting Professor of Bioinformatics at Harvard School of Public Health.
In 2008, Hide was the subject of a directed search and became an Associate Professor in the Department of Biostatistics at Harvard School of Public Health. Also in 2008, he founded the Harvard School of Public Health Bioinformatics Core and became Director of the Harvard Stem Cell Institute Center for Stem Cell Bioinformatics.
In 2014, Hide accepted a Chair, and became Professor of Computational Biology, at the Sheffield Institute for Translational Neurosciences within the Department of Neuroscience at the University of Sheffield.
Hide was awarded the National President’s Award for research in 1998, was elected to membership of the Academy of Science of South Africa in 2007 and, also in 2007, won the Oppenheimer Foundation Distinguished Sabbatical Research Fellowship. In 2011, he was the first recipient of the International Society for Computational Biology Award for Outstanding Achievement, in recognition of his work on the development of computational biology and bioinformatics in Africa. Hide has since established the Centre for Genome Translation at the University of Sheffield and leads on bioinformatics for the Cure Alzheimer’s Fund Genome Project. His group specialises in target prioritisation, drug repurposing and biomarker discovery in neurodegenerative diseases.

Making Genomics Come True: How can we achieve real acceleration of genomics into medicine?
We are now rapidly moving from single human genomes to deca-, centi- and even milligenome projects. With more ways to compare gene variation against a background come new methods to select variants and genes for their potential in predicting and impacting a disease. Gene hunting is still very much in fashion, and genes represent tempting targets for drug development.
But like David Bowie, we need to push the boundaries and embrace the growing realisation that genes work in cohorts, and that it is the interaction of these cohorts that drives the disease phenotype. Identifying and targeting the pathways and processes that drive disease is the new black. To action discovery, we need to address ways in which to benchmark the selection of disease genes, pathways and processes. In turn, we need to develop more efficient (read: less ineffective) ways to select therapeutics that are likely to be acceptable for real health interventions.
The talk will present how we address these challenges through commoning for data sharing, provenance, reproducibility and workflows, benchmarks for assessment of approaches, standardisation for pathway activity, and integrative approaches to discovering the relationships between therapeutic target prioritisation, network topology, pathway interaction, genome variation, disease modelling and drug repurposing.
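As a concrete example of what "standardisation for pathway activity" can mean, one simple and common convention scores a pathway as the mean z-score of its member genes. The sketch below, with invented gene names and expression values, illustrates that scheme only; it is not the specific method used by the speaker's group.

```python
import statistics

# Score a pathway as the mean z-score of its member genes, with
# z-scores computed over all measured genes. Gene names and
# expression values are hypothetical.
expression = {"APP": 8.1, "PSEN1": 7.4, "MAPT": 6.9, "GAPDH": 5.0, "ACTB": 5.2}
pathway = {"APP", "PSEN1", "MAPT"}

mean = statistics.mean(expression.values())
sd = statistics.stdev(expression.values())
zscores = {gene: (value - mean) / sd for gene, value in expression.items()}

activity = statistics.mean(zscores[g] for g in pathway if g in zscores)
print(round(activity, 2))  # positive: the pathway is up relative to background
```

Because the score is standardised against the background of all genes, activities are comparable across pathways of different sizes.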



Inna Kuperstein,
Institut Curie, Paris, France.

Inna (Faina) Kuperstein is a scientist at Institut Curie in Paris, working on the systems biology of cancer and other human diseases (https://sysbio.curie.fr/). She obtained her PhD in neurochemistry at the Weizmann Institute of Science, Israel, where she studied molecular mechanisms of stroke and oxidative stress. She then did postdoctoral research in the Center for the Biology of Disease at the Flanders Institute of Biotechnology (VIB) in Leuven, Belgium, on the molecular mechanisms of synaptic and neuronal toxicity in Alzheimer‘s disease.
A molecular and cell biologist with wide experience in signal transduction research, since 2009 she has been applying her biological knowledge in the field of computational systems biology in the Computational Systems Biology of Cancer group at Institut Curie. She participates in multidisciplinary projects with pharmaceutical companies, biologists, mathematicians, systems biologists and clinicians.
In particular, she specialises in construction of comprehensive cell signaling maps, their applications for analysis with high-throughput data and results interpretation.
Among others, she initiated and is currently involved in the development of the web-based tool NaviCell (https://navicell.curie.fr/), dedicated to the manipulation of big signalling maps. This Google Maps engine-based tool allows user-friendly navigation through big molecular maps as well as the integration and visualisation of high-throughput data in the context of those maps.
In addition, she initiated and now leads the Atlas of Cancer Signalling Network (ACSN) project (https://acsn.curie.fr/), dedicated to the systematic and detailed representation of molecular mechanisms involved in cancer. ACSN maps are applied to network analysis, to modelling new synthetic interactions between genes in cancer, and to predicting drug response and resistance mechanisms in patients.

Atlas of Cancer Signaling Network and NaviCell: Systems Biology resources for studying cancer biology
Studying reciprocal regulations between cancer-related pathways is essential for understanding signaling rewiring during cancer evolution and in response to treatments. With this aim we have constructed the Atlas of Cancer Signaling Network (ACSN), a resource of cancer signaling maps and tools with an interactive web-based environment for navigation, curation and data visualization. The content of ACSN is represented as a seamless ‘geographic-like’ map browsable using the Google Maps engine and semantic zooming. The associated blog provides a forum for commenting on and curating the ACSN map content. The integrated NaviCell web-based toolbox allows users to import and visualize heterogeneous omics data on top of the ACSN maps and to perform functional analysis of the maps. The tool contains standard heatmaps, barplots and glyphs, as well as the novel map staining technique for grasping large-scale trends in numerical values projected onto a pathway map.
To demonstrate applications of ACSN and NaviCell, we show a study on drug sensitivity prediction using the networks. We performed a structural analysis of the Cell Cycle and DNA repair signaling network together with omics data from ovarian cancer patients resistant to genotoxic treatment. From this study we retrieved synthetic lethal gene sets and suggested intervention gene combinations to restore sensitivity to the treatment. In another example, analysis of multi-level omics data from cell lines, interpreted in the context of signaling network maps, highlighted different DNA repair molecular profiles associated with sensitivity to each of the drugs, rationalizing combined treatment in some cases.
Analysis of multi-omics data together with cell signaling information helps find personalized treatments. In an additional study we show how the epithelial-to-mesenchymal transition (EMT) signaling network from the ACSN collection has been used to find metastasis inducers in colon cancer through network analysis. We performed a structural analysis of the EMT signaling network that highlighted the network’s organization principles and allowed complexity reduction down to core regulatory routes. Using the reduced network, we modeled single and double mutants for achieving the metastasis phenotype. We predicted that a combination of p53 knock-out and overexpression of Notch would induce metastasis, and suggested the molecular mechanism. This prediction led to the generation of a colon cancer mouse model with metastases in distant organs. We confirmed in invasive human colon cancer samples that Notch and p53 gene expression is modulated in a similar manner to the mouse model, supporting a synergy between these genes in permitting metastasis induction in colon.
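To illustrate the "map staining" idea, averaging a numeric value over each module of a signalling map so that large-scale trends stand out, here is a toy Python sketch. Module and gene names are invented, and this is an illustration of the concept, not the NaviCell code itself.

```python
# Toy "map staining": colour each map module by the average of a
# per-gene value (e.g. differential expression). Names are hypothetical.
modules = {
    "Cell Cycle": ["CCND1", "CDK4", "RB1"],
    "DNA Repair": ["BRCA1", "BRCA2"],
}
expression = {"CCND1": 2.5, "CDK4": 1.9, "RB1": -0.4, "BRCA1": -1.2, "BRCA2": -0.8}

staining = {
    module: sum(expression[g] for g in genes) / len(genes)
    for module, genes in modules.items()
}
for module, value in sorted(staining.items()):
    print(f"{module}: {value:+.2f}")  # Cell Cycle: +1.33 / DNA Repair: -1.00
```

On the real maps each module average is rendered as a background colour, so a whole omics profile can be read at a glance at low zoom.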



Alfonso Valencia,
Barcelona Supercomputing Center (BSC), Spain.

Disease comorbidities and network approaches



Raffaele Giancarlo,
Department of Mathematics and Computer Science, University of Palermo, Italy.

Raffaele Giancarlo is Full Professor of Computer Science, Faculty of Science, Università di Palermo. He obtained a Ph.D. in Computer Science from Columbia University in 1990, defending one of the first Ph.D. theses on Algorithms and Computational Biology. He has been awarded several fellowships, among them the AT&T Bell Labs Post-Doctoral Fellowship in Mathematical and Information Sciences and a CNRS Visiting Scientist Grant. He has been an invited keynote speaker at several conferences and summer schools, including the SIAM International Conference in Applied Mathematics, and has held visiting scientist positions at many research labs and universities in both the USA and Europe, such as AT&T Shannon Laboratories, the Max Planck Institute for Molecular Genetics (Bioinformatics Section) in Berlin, and the Computer Science Department of Haifa University. Moreover, he has served as chairman or member of the scientific committees of several conferences in Theoretical Computer Science and Bioinformatics, such as the Workshop on Algorithms in Bioinformatics, Combinatorial Pattern Matching, String Processing and Information Retrieval, ICALP, COCOON and RECOMB. He is currently on the editorial boards of Theoretical Computer Science, Algorithms for Molecular Biology, BMC Bioinformatics and BMC Research Notes. He has been the principal investigator of several Italian Ministry of Education research projects in Bioinformatics and of one CNRS grant, and a reviewer for many of the best-established journals and conferences in Computational Biology and Theoretical Computer Science. In addition, he has been a reviewer for several national granting agencies, including the US NSF and the Royal Society, and is regularly consulted by universities nationally and internationally to assess faculty promotions.
As for involvement in national and local higher-education structures, he has been President of the Computer Science Curricula at Università di Palermo and a member of the Italian Computer Science Curricula Commission of the Italian Association of Computer Science Researchers (GRIN). He is currently on the Board of Directors of the CINI Consortium, which represents all of the academic competences in Computer Science and Engineering present in Italy. Finally, he is on the Scientific Advisory Board for Research at Università di Palermo.
His main scientific interests include the design and analysis of combinatorial algorithms, with particular emphasis on string algorithms (ranging from bioinformatics to data compression), data structures and automata theory, with applications to Bioinformatics. His scientific production consists of more than 90 papers that have appeared in established journals and conferences. Moreover, he is co-author of several patents, granted by the US Patent Office, in information retrieval and software maintenance.

Getting Beyond Proof of Principle for Big Data Technologies in Bioinformatics: MapReduce Algorithmic, Programming and Architectural Issues
High Performance Computing (HPC) in Bioinformatics has traditionally relied on a classic architectural paradigm: the shared-memory multi-processor. With the advent of Cloud Computing, this new way of managing Big Data has also been considered for Bioinformatics, initially with Proof of Concept results investigating the advantages of the new computational paradigm. These have been followed by an increasing number of tools for specific Bioinformatics tasks, developed mainly with the MapReduce programming paradigm, which is in turn supported by the Hadoop and Spark middleware.
A careful analysis of the state of the art indicates that the main advantage of those Big Data technologies is the perception of boundless scalability, at least in terms of time. However, how effectively the computing resources are used in the Cloud… is rather cloudy, as most of the available software almost entirely delegates the management of the distributed workload to the powerful primitives of Hadoop and Spark. On a private cloud, i.e., a physical computing cluster that can be configured at will by the user, one can show that carefully designed MapReduce algorithms largely outperform those that naively “delegate” to Hadoop and Spark. In the public cloud, e.g., virtual clusters (for instance created via OpenStack) with a dynamic and instance-dependent allocation of physical resources, issues of an architectural nature, or related to the configuration of the virtual cluster and largely invisible to the end-user, may translate into a lack of data locality that results in poor MapReduce performance with respect to the resources used.
In order to obtain resource-effective, portable, Cloud-based software for Bioinformatics pipelines, the issues mentioned earlier must be carefully studied and accounted for, in particular to have an impact on Personalized Medicine. As a matter of fact, the need is so pressing, and apparently the expected demand so high, that Edico Genome and Amazon have started a collaboration that makes available Bioinformatics pipelines highly engineered to take advantage of FPGA programmability and the Cloud. The objective is to take the already highly performing shared-memory multi-processor solutions offered by Edico Genome and make them “real time”. Fortunately, this is so far only the “high end” of the spectrum where the transition from the old HPC paradigm to the new Cloud Computing one has gone beyond Proof of Concept.
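The MapReduce paradigm discussed above can be made concrete with a classic bioinformatics task, k-mer counting. The sketch below runs the map and reduce phases in-process in Python; Hadoop or Spark would distribute them across a cluster, and, as the abstract argues, how well that distribution exploits data locality is where the real engineering lies.

```python
from collections import Counter
from itertools import chain

def map_phase(read, k=3):
    """Map: emit a (k-mer, 1) pair for each k-mer in one sequencing read."""
    return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

def reduce_phase(pairs):
    """Reduce: sum the counts per k-mer (the shuffle step is implicit)."""
    counts = Counter()
    for kmer, n in pairs:
        counts[kmer] += n
    return counts

reads = ["GATTACA", "TACAGAT"]
counts = reduce_phase(chain.from_iterable(map_phase(r) for r in reads))
print(counts["TAC"], counts["GAT"])  # 2 2 -- each occurs once per read
```

In a distributed setting, `map_phase` runs wherever a read is stored and the shuffle groups pairs by k-mer before reduction; a naive port that ignores where the reads live forfeits exactly the data locality the abstract warns about.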



Alexander Kel,
GeneXplain, Germany.

Alexander Kel received his Ph.D. in Bioinformatics, Molecular Biology and Genetics in 1990. He studied biology and mathematics at Novosibirsk State University and obtained his M.S. in biology in 1985. He worked for 15 years at the Institute of Cytology and Genetics (ICG), Russia, holding positions as programmer, scientist, senior scientist and Vice-Head of the Laboratory of Theoretical Molecular Genetics. In 1995, he won the Academician Belyaev Award. In 1999 he received independent funding from the Volkswagen Foundation and organized a Bioinformatics group at the ICG. From 2000 to 2010, he was Senior Vice President of Research & Development at BIOBASE GmbH, Wolfenbüttel, Germany.
During his career, he has worked in almost all branches of current bioinformatics including: theoretical models of molecular genetic information systems, sequence analysis, gene recognition, promoter analysis and prediction, analysis of protein secondary structure, prediction of RNA secondary structure, theory of mutation and recombination process, molecular evolution, databases and gene expression studies.
He is the author of more than 90 scientific publications and of several chapters in books on bioinformatics, tutorials and education materials.

Walking pathways in cancer
Huge regions of non-coding DNA in genomes are the source of the high adaptability of the molecular genomic systems of multicellular eukaryotic organisms (such as human) to varying external conditions. We think that such high adaptability is provided, first of all, through the structural plasticity of gene regulatory networks. The binding of highly variable complexes of transcription factors to highly fluctuating open chromatin regions in the genome (epigenomic variations) is the fundamental basis for this structural plasticity of gene regulatory networks. In this talk we will discuss the evolutionary advantages of such structural plasticity of gene regulatory networks, as well as the high price such systems have to pay for this plasticity: terrible diseases such as cancer. We think that often irreversible structural changes of the regulatory networks, due to an epigenomic “evolution” of genome regulatory regions, cause transformations in the system that switch the normal state to a disease state. We call such structural network changes “walking pathways”. The analysis of this phenomenon helps us to understand the mechanisms of molecular switches (e.g. between programs of cell death and programs of cell survival) and to identify prospective drug targets to treat cancer. Such structural plasticity of regulatory networks observed in the genomes of higher eukaryotes is, in our view, the result of an evolutionary “aromorphosis” towards the emergence of a completely new mechanism of evolution of multicellular organisms.
Empirical information about the interactions of transcription factors with their regulated target genes, obtained by either conventional or high-throughput methods, has been collected in the TRANSFAC database for 28 years, and statistical models inferred from this information have been included as positional weight matrices (PWMs), made available for the prediction of regulatory sites as well. A new extension includes the syntax (relevant combinations) and semantics (regulated processes) of regulatory sites. Extended annotation of gene-disease associations is available in the Human Proteome Survey Database (HumanPSD), connected with the signaling pathways that control the activity of TFs (TRANSPATH database). All this carefully curated information can be used in full power to analyze disease-related multi-omics data using the recently created geneXplain platform, which helps to decipher the molecular mechanisms of disease, often at very early stages of its progression.
Finally, in this talk we will present an “upstream analysis” strategy for causal analysis of such multiple “-omics” data. It analyzes promoters using the TRANSFAC database, combines this with an analysis of the upstream signal transduction pathways, and identifies master regulators as potential drug targets for a pathological process. We applied this approach to a complex multi-omics data set that contains transcriptomics, proteomics and epigenomics data. We identified the following potential drug targets against the induced resistance of cancer cells towards chemotherapy by methotrexate (MTX): TGFalpha, IGFBP7 and alpha9-integrin; the chemical compounds zardaverine and divalproex; as well as human metabolites such as nicotinamide N-oxide.
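To make the PWM-based promoter analysis concrete, here is a toy Python scan with an invented 3-position log-odds matrix. A real TRANSFAC matrix is longer and learned from curated binding sites; this sketch only shows the scoring mechanics.

```python
# Toy PWM scan: slide a positional weight matrix along a promoter
# sequence and report the best-scoring site. The matrix is invented.
pwm = [  # one dict of log-odds scores per matrix position
    {"A": 1.2, "C": -0.5, "G": -1.0, "T": -0.8},
    {"A": -0.9, "C": 1.1, "G": -0.7, "T": -0.6},
    {"A": -1.1, "C": -0.4, "G": 1.3, "T": -0.9},
]

def site_score(site):
    """Sum the per-position log-odds scores for one candidate site."""
    return sum(position[base] for position, base in zip(pwm, site))

def best_hit(sequence):
    """Return (score, offset) of the best match of the PWM in sequence."""
    w = len(pwm)
    return max((site_score(sequence[i:i + w]), i)
               for i in range(len(sequence) - w + 1))

score, offset = best_hit("TTACGAT")
print(round(score, 1), offset)  # 3.6 2 -- the consensus ACG at offset 2
```

In practice the raw score is compared against matrix-specific cut-offs to call a predicted regulatory site, and the syntax/semantics extensions mentioned above then filter site combinations.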



Luana Licata,
ELIXIR-IIB, Italy.

Luana Licata acquired her competencies over 15 years of professional involvement in various aspects of the applied life sciences. She has a Master’s degree in Biology and obtained her Ph.D. in biochemistry and structural biology at the Max Planck Institute of Biophysics in Frankfurt am Main in 2004. Since 2006, she has been working as a researcher and biocurator in the Bioinformatics and Computational Biology Unit of the Molecular Genetics Laboratory led by Prof. Gianni Cesareni at the Tor Vergata University of Rome, Italy.
Her main research activity as a curator of the MINT database (Molecular INTeraction database) includes the curation of molecular biology and molecular medicine papers, with a specific focus on biological pathways and protein interactions, and the integration and validation of heterogeneous biological data. More recently she has been involved in the coordination and curation of VirusMentha, which integrates data from the major databases, and of the SIGNOR resource for signalling information.

ELIXIR-IIB – the Italian Infrastructure for Bioinformatics: a growing support to national and international research in life sciences
ELIXIR (https://www.elixir-europe.org/) unites Europe’s leading life science organisations in managing and safeguarding the increasing volume of data being generated by publicly funded research, and coordinates, integrates and sustains bioinformatics resources across its member states (ELIXIR nodes).
ELIXIR-IIB (http://elixir-italy.org/), the ELIXIR Italian node and infrastructure for bioinformatics, is coordinated by the National Research Council and currently includes 17 centres of excellence among which are research institutes, universities and technological institutions.
The infrastructure supports the exchange and development of skills, and the integration of publicly available and internationally recognised Italian bioinformatics resources within the European infrastructure.
ELIXIR-IIB, which aims to bring together all the Italian researchers working in the field of bioinformatics, is striving to assume a pivotal role for the national and international life science communities. This is reflected in the growing number of bioinformatics services, initiatives and projects that ELIXIR-IIB supports or participates in, including H2020 grants, and in the development of the ELIXIR-IIB Training Programme (https://elixir-iib-training.github.io/website/), which is building a thriving community that strongly believes quality training in bioinformatics is essential to achieve excellence in life science research.



Emanuela Merelli,
University of Camerino, Italy.

Emanuela Merelli is Full Professor of Computer Science at the University of Camerino where she coordinates the PhD Program in Computer Science, International School of Advanced Studies.
She obtained a Doctoral Degree in Computer Science at the University of Pisa in 1985 and a PhD in Artificial Intelligence Systems at the Università Politecnica delle Marche in Ancona in 2000. Among her appointments, she has been a Fulbright Scholar at the University of Oregon, Computer Science Department, and a Visiting Researcher at the University of East Anglia, School of Information Systems, Norwich.
Her main research interests are in the following fields: bio-inspired formal methods; concurrency theory; agent-oriented modelling and multi-level complex systems; topological field theory of data and new models of computation; the computational biology of RNA folding; and the evolution of immune system memory.
Among her achievements are bio-inspired formal languages, such as BioAgent, SHAPE Calculus, BIOSHAPE and BOSL, for modelling, simulating and analysing autonomous agents; the construction of Hermes, an agent-based middleware for mobile computing; a research programme towards a new strategy for mining data through a data language, which turns out to be a shape language: the topological field theory of data; the design and implementation of the jHoles algorithm to study the connectivity features of complex networks, with application to epidermal tumour diagnosis; and a new data model, Resourceome, that allows declarative and procedural knowledge to be managed with a unique model whose actions connect the use of a resource to its domain.

Topological Field Theory of Data: a new venue for Biomedical Big Data Analysis
In her talk, she will challenge the current thinking in IT on the Big Data question, proposing a programme that aims to construct an innovative methodology for data analytics going beyond the usual paradigms of data mining rooted in the notions of Complex Networks and Machine Learning. The method presented, at least as a scheme, returns an automaton as a recognizer of the data language and is, to all effects, a Field Theory of Data.
She will discuss, using biomedical case studies, how to build, directly out of probing the data space, a theoretical framework enabling the extraction of the manifold hidden relations (patterns) that exist among data as correlations depending on the semantics generated by the mining context.
The programme, which is grounded in recent innovative ways of integrating data into a topological setting, proposes the realization of a Topological Field Theory of Data, transferring and generalizing to the space of data notions inspired by physical (topological) field theories, and harnesses the theory of formal languages to define the potential semantics necessary to understand the emerging patterns.



Francis Ouellette,
Génome Québec, Canada.

B.F. Francis Ouellette has recently (February 2017) started a new position as Chief Scientific Officer and Vice President of Scientific Affairs at Génome Québec, a not-for-profit organization that supports genomics in Québec. Before that, Francis was the associate director of the Informatics and Biocomputing platform and a senior scientist at the Ontario Institute for Cancer Research (OICR) in Toronto, Ontario. Before his move to Toronto in 2007, Francis was an Associate Professor in the Department of Medical Genetics at UBC and Director of the UBC Bioinformatics Centre (UBiC) at the Michael Smith Laboratories. Francis was trained at McGill University (undergraduate and graduate studies), as well as at the University of Calgary and Simon Fraser University (graduate studies). After working on the yeast genome sequencing project at McGill University, he took a position at the NCBI as GenBank coordinator from 1993 to 1998. Francis also still holds a position of Associate Professor in the Department of Cell and Systems Biology at the University of Toronto.
His work at the OICR involved bioinformatics training, as well as biocuration and management of cancer genomic data. He continues his bioinformatics training work with bioinformatics.ca at Génome Québec. Since his work at the NIH, coordinating the largest open DNA sequence database in the world (GenBank), Francis has been dedicated to ensuring the openness of science: the data it generates and the publications that report them. He pursues this not only through his own work, but also through the various advisory and editorial boards he serves on: Education Editor for PLOS Computational Biology; Associate Editor for DATABASE, an OUP Open Access journal; and advisor to a number of NIH-funded Open Source and Open Data resource projects, as a member of the Saccharomyces Genome Database SAB, the Galaxy Project SAB, the GenomeSpace advisory board and the Human Microbiome Project advisory board. Francis is also on the ELIXIR-Europe SAB and is co-chair of the H3ABioNet SAB.

Open Data is Essential for Personalized Medicine



Matthias Reumann,
IBM Research – Zurich, Switzerland.

Matthias Reumann (b. 1978) received the Master of Engineering in Electronics with the Tripartite Diploma from the University of Southampton, UK, in 2003 and continued with PhD studies at the Karlsruhe Institute of Technology with Prof. Olaf Doessel at the Institute for Biomedical Engineering, Universitaet Karlsruhe (TH). Reumann focused on translational research in cardiac models and completed his PhD summa cum laude in 2007. The research was recognised with two prestigious research awards from both clinical and biomedical professional societies. Reumann continued research in multi-scale systems biology at the IBM T.J. Watson Research Center, Yorktown Heights, NY. His work focused on creating high-resolution heart models that scale on supercomputers, which yielded several high-profile publications in Science Translational Medicine, the Journal of the American College of Cardiology and Supercomputing. He expanded his research interests to genomics in 2010 at the IBM Research Collaboratory for Life Sciences – Melbourne, investigating higher-order interactions of single nucleotide polymorphisms in breast and prostate cancer in collaboration with Prof. John Hopper.
In 2011, Reumann built up the healthcare research team at the IBM Research – Australia laboratory, with focus areas in healthcare analytics, medical image processing and genomics. The goal in genomics was to bring next-generation sequencing into a production environment in a public health microbiology diagnostic unit. Reumann moved back to Europe in December 2013 and joined the IBM Research – Zurich laboratory, where his research focuses on sustainable, resilient health systems to bridge the divide from bench to bedside to society. He received his second PhD, in this field, from Maastricht University in October 2017, titled “Big Data in Public Health: From Genes to Society”. Reumann is an associate editor of the IEEE Journal of Translational Engineering in Health and Medicine, a Senior Member of the IEEE, and has served on the Administrative Committee of the IEEE Engineering in Medicine and Biology Society from 2009 to 2013 as well as on the IEEE Technical Advisory Board from 2011 to 2012. His research is mentioned in editorials and reviews.

Big data and cognitive computing in healthcare: weathering the perfect storm
Big data in healthcare is experiencing the perfect storm: volume is increasing exponentially with accelerating speed, and the variety of data ranges from multi-omics information to lifestyle measures collected with the help of mobile devices backed by cloud infrastructures. State-of-the-art analytical methods are generally limited by computational approaches. However, the convergence of data analytics, sophisticated modelling approaches and cognitive computing promises to solve the big data challenges in healthcare and the life sciences. Data analytics, especially in today’s omics era, yields large volumes of results, provided the computational challenges are overcome. Sieving through these results requires expert and translational knowledge. Cognitive computing can play a significant role in making results transparent. Cognitive computing tools can be used to create hypotheses to guide experimental studies, but also as prior knowledge that drives data analytics. The increasing amount of data requires an amount of computation that can, at some point, only be tackled using supercomputers. In biophysical modelling we have already shown how the computational challenge can be overcome using high-performance computing systems. The sophistication of computer modelling of biophysical processes has made the transition from basic research to translational science and medicine. It is feasible today that data in healthcare will be augmented by simulations of biophysical models tailored to each patient. Cognitive computing is a promising path to making analytical results transparent. The IBM Watson™ technology allows analysis results to be represented within a global context of accumulated knowledge from the published literature. Viewing data and analysis in that global context will not only enable verification of results, but also help accelerate discovery and the identification of, for example, new targets in drug discovery.
The combination of data-driven and knowledge-based analytics in a cognitive computing environment becomes a powerful way to create hypotheses and to limit the search space so that it can be tested efficiently using traditional laboratory methods. The IBM Watson™ technology allows one to find “the needle in the haystack” of today’s big data challenge. Hence, the power of big data can only be unleashed by embracing new approaches to data-driven analysis within a cognitive computing environment. This creates a holistic view that places big data analytics in the context of the accumulated knowledge of the scientific community.

