What is an ontology?
http://ontogenesis.knowledgeblog.org/66

Robert Stevens*, Alan Rector* and Duncan Hull†

* School of Computer Science, The University of Manchester, Oxford Road, Manchester, UK
† EMBL Outstation – Hinxton, European Bioinformatics Institute, Wellcome Trust Genome Campus, Hinxton, Cambridge, CB10 1SD, UK

Defining ontology

In OntoGenesis we intend to provide biologists and bioinformaticians with the means to understand ontologies within the context of biological data: their nature, their use, how they are built, and some of the bio-ontologies that exist. All of this is based upon knowing what an ontology is, which can then lead on to the motivation for their use in biology, how they are used, and so on. The definition of ontology is disputed, and this is confounded by computer scientists having re-used and re-defined a term from a discipline of philosophy. The definition here will not suit everyone and will upset many (especially our use of the word “concept”); we make no apology for this, noting only that the argument can take up resources better used in helping biologists describe and use their data more effectively.

In informatics and computer science, an ontology is a representation of the shared background knowledge for a community. Very broadly, it is a model of the common entities that need to be understood in order for some group of software systems and their users to function and communicate at the level required for a set of tasks. In doing so, an ontology provides the intended meaning of a formal vocabulary used to describe a certain conceptualisation of objects in a domain of interest. An ontology describes the categories of objects described in a body of data, the relationships between those objects and the relationships between those categories. In doing so, an ontology describes those objects and sometimes defines what needs to be known in order to recognise one of those objects. An ontology should be distinguished from thesauri, classification schemes and other simple knowledge organisation systems. By controlling the labels given to the categories in an ontology, a controlled vocabulary can be delivered, though an ontology is not itself a controlled vocabulary. When represented as a set of logical axioms with a strict semantics, an ontology can be used to make inferences about the objects that it describes, and consequently provides a means to manipulate knowledge symbolically.

In philosophy, ontology is a term with its origins in Aristotle's writings on metaphysics (Metaphysics IV.1, c. 350 BCE). In very general terms, it is a branch of philosophy concerned with that which exists; that is, a description of the things in the world. Philosophers in this field tend to be concerned with understanding what it means to be a particular thing in the world; that is, the nature of the entity. The goal is to achieve a complete and true account of reality. Computer scientists have taken the term and somewhat re-defined it, removing the more philosophical aspects and concentrating upon the notion of a shared understanding or specification of the concepts of interest in a domain of information that can be used by both computers and humans to describe and process that information. The goal of a computer science ontology is to make knowledge of a domain computationally useful; there is less concern with a true account of reality, as it is information that is being processed, not reality. The definition used here (like any other definition, for that matter) is contentious and many will disagree with it. Within the bio-ontology community there are those, such as the OBO Foundry, who take a much more philosophical stance on ontology.

Putting the string “define:ontology” into the Google search engine finds some twenty or so definitions of ontology. They all cluster around either a philosophical or a computer science definition of ontology. This is presumably the root of the jibe that ontology is all about definitions, but there is no definition of ontology. So, we should really distinguish between philosophical ontology and computer science ontology and remove some of the dispute. Tom Gruber has one of the most widely cited definitions of ontology in computer science, though conceptual models of various types have been built within computer science for decades. His definition is:

“In the context of knowledge sharing, the term ontology means a specification of a conceptualisation. That is, an ontology is a description (like a formal specification of a program) of the concepts and relationships that can exist for an agent or a community of agents. This definition is consistent with the usage of ontology as set-of-concept-definitions, but more general. And it is certainly a different sense of the word than its use in philosophy.” DOI:10.1006/knac.1993.1008 DOI:10.1006/ijhc.1995.1081

The most noteworthy point is that Gruber states that his definition of ontology is not “ontology in the philosophical sense”. Nevertheless, computer science ontologies are still informed by philosophical ontology, but the goals for their creation and use are different.

An important part of any ontology is the individuals or objects. There are trees, flowers, the sky, stones, animals, etc. As well as these material objects, there are also immaterial objects, such as ideas, spaces, representations of real things, etc. In the world of molecular biology and beyond, we wish to understand the nature of, distinctions between and interactions of objects such as: small molecules and macromolecules; their functionalities; the cells in which they are made and work, together with the parts of those cells; the tissues these cells aggregate to form; and so on. We do this through data collected about these phenomena, and consequently we wish to describe the objects described in those data.

As human beings, we put these objects into categories or classes. These categories are a description of that which is described in a body of data. The categories themselves are a human conception. We live in a world of objects, but the categories into which humans put them are merely a way of describing the world; they do not themselves exist; they are a conceptualisation. The categories in an ontology are a representation of these concepts. The drive to categorise is not restricted to scientists; all human beings seem to indulge in the activity. If a community agrees upon which categories of objects exist in the world, then a shared understanding has been created.

In order to communicate about these categories, as we have already seen, we need to give them labels. A collection of labels for the categories of interest forms a vocabulary or lexicon. Human beings can give multiple labels to each of these categories. This habit of giving multiple labels to the same category (synonymy) and the converse of giving the same label to different categories (polysemy) leads to grave problems when trying to use the descriptions of objects in biological data resources. This issue is one of the most powerful motivations for the use of ontologies within bioinformatics.

As well as agreeing on the categories in which we will place the objects of interest described in our data, we can also agree upon what the labels are for these categories. This has obvious advantages for communications – knowing to which category of objects a particular label has been given. This is an essential part of the shared understanding. By agreeing upon these labels and committing to their use, a community creates a controlled vocabulary.

The objects of these categories can be related to each other. When each and every member of one category or class is also a member of another category or class, the former is subsumed by the latter, or forms a subclass of the superclass. This subclass–superclass relationship is variously known as the “is-a” (DOI:10.1109/MC.1983.1654194), subsumption or taxonomic relationship. There can be more than one subclass for any given class. If every kind of subclass is known, then the description is exhaustive or covered. Also, any pair of subclasses may overlap in their extent, that is, share some objects, or they may be mutually exclusive, in which case they are said to be disjoint. Both philosophical and ontology-engineering best practice often advocate keeping sibling classes pairwise disjoint.
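To make the is-a relationship concrete, here is a minimal sketch in Python using the rdflib library; the class names and namespace are invented for illustration, not drawn from any real bio-ontology. It asserts a small taxonomy with a disjointness axiom and walks the subsumption hierarchy upwards.

```python
from rdflib import Graph, Namespace, RDFS, OWL

EX = Namespace("http://example.org/bio#")  # invented namespace
g = Graph()

# A tiny taxonomy: Enzyme is-a Protein is-a Macromolecule.
g.add((EX.Protein, RDFS.subClassOf, EX.Macromolecule))
g.add((EX.Enzyme, RDFS.subClassOf, EX.Protein))

# Sibling classes declared pairwise disjoint, as best practice suggests.
g.add((EX.NucleicAcid, RDFS.subClassOf, EX.Macromolecule))
g.add((EX.Protein, OWL.disjointWith, EX.NucleicAcid))

def superclasses(graph, cls):
    """Walk the subsumption (is-a) hierarchy upwards from cls."""
    for parent in graph.objects(cls, RDFS.subClassOf):
        yield parent
        yield from superclasses(graph, parent)

print([str(c) for c in superclasses(g, EX.Enzyme)])
# ['http://example.org/bio#Protein', 'http://example.org/bio#Macromolecule']
```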

As well as the is-a relationship, objects can be related to each other by many other kinds of relationship (DOI:10.1186/gb-2005-6-5-r46). One of the most frequently used is the partOf relationship, which is used to describe how objects are parts of, components of, or regions of other objects. Other relationships describe how one object developsInto, or is transformed into, another object whilst retaining its identity (such as tadpole to frog). The derivesFrom relationship describes how one object changes into another object with a change of identity. Another relationship describes how a discrete object can participateIn a process object.

These relationships, particularly the is-a relationship, give structure to a description of a world of objects. The relationships, like the categories whose instances they relate, also have labels; relationship labels are another part of a vocabulary. The structured description of objects also gives a structured controlled vocabulary.

So far, we have only described relationships that make some statement about the objects being described. It is also possible to make statements about the categories or classes themselves. When describing the elemental form of an atom such as ‘Helium’, for example, statements about its discovery date or industrial uses are about the category or class, not about the objects in the class. Each instance of a ‘Helium’ object was not discovered in 1903; most helium atoms existed prior to that date, but humans discovered and labelled that category at that date.

Ideally, we wish to know how to recognise members of these categories. That is, we define what it is to be a member of a category. When describing the relationships held by an object in a category, we put inclusion conditions upon those instances or category membership criteria. We divide these conditions into two sorts:

  1. Necessary Conditions: These are conditions that an object must fulfil, but fulfilling that condition is not enough to recognise an object as being a member of a particular category.
  2. Necessary and Sufficient Conditions: These are conditions that an object must fulfil and are also sufficient to recognise an object to be a member of a particular category.

For example, an ontology of small molecules such as Chemical Entities of Biological Interest (ChEBI) (DOI:10.1093/nar/gkm791) has a definition of alcohol, and there are several ways of defining what this means. Each and every organic molecule of alcohol must have a hydroxyl group; that an organic molecule has a hydroxyl substituent is not, however, enough to make that molecule an alcohol. If, however, an organic molecule has a saturated backbone and a hydroxyl substituent on that backbone, that is enough to recognise an alcohol (at least according to the IUPAC “Gold Book”).
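The distinction between the two kinds of condition can be sketched in code. The following toy Python classifier is only an illustration of the logic, not ChEBI's actual axiomatisation; the attribute names are invented.

```python
from dataclasses import dataclass

@dataclass
class Molecule:
    has_hydroxyl_substituent: bool
    has_saturated_backbone: bool

# Necessary condition: every alcohol has a hydroxyl group,
# but having one does not make a molecule an alcohol.
def satisfies_necessary(m: Molecule) -> bool:
    return m.has_hydroxyl_substituent

# Necessary and sufficient condition: a hydroxyl group on a
# saturated backbone is enough to recognise an alcohol.
def is_alcohol(m: Molecule) -> bool:
    return m.has_hydroxyl_substituent and m.has_saturated_backbone

phenol = Molecule(has_hydroxyl_substituent=True, has_saturated_backbone=False)
ethanol = Molecule(has_hydroxyl_substituent=True, has_saturated_backbone=True)
print(satisfies_necessary(phenol), is_alcohol(phenol))    # True False
print(satisfies_necessary(ethanol), is_alcohol(ethanol))  # True True
```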

In making such definitions, an ontology makes distinctions. A formal ontology makes these distinctions rigorously. Broad ontological distinctions include that between Continuant and Occurrent; that is, between entities (things we can put in our hands) and processes. Continuants take part in processes, and processes have participants that are continuants. Another distinction is between dependent and independent objects: the existence of some objects depends on the existence of another object to bear them. For example, a car is independent of the blue colour it bears, while the colour depends on the car. Continuants can be sub-categorised into material and immaterial continuants, such as the skull and the cavity in the skull. Making such ontological distinctions primarily helps in choosing the relationships between the objects being described, as well as giving some level of consistency.

Capturing such descriptions, including the definitions, forms an ontology. Representing these descriptions as a set of logical axioms with a strict semantics enables those descriptions to be reliably interpreted by both humans and computers. Forming a consensus on which categories should be used to describe a domain, and agreeing on the definitions by which objects in those categories are recognised, enables that knowledge to be shared.

The life sciences, unlike physics, have not yet reduced their laws and principles to mathematical formulae. It is not yet possible, as it is with physical observations, to take a biological observation, apply some equations, determine the nature of that observation and make predictions. Biologists record many facts about entities and from those facts make inferences. These facts are the knowledge about the domain of biology; this knowledge is held in the many databases and literature resources used in biology.

Due to human nature, the autonomous way in which these resources develop, the time span in which they develop, etc., the categories into which biologists put their objects and the labels used to describe those categories are highly heterogeneous. This heterogeneity makes the knowledge component of biological resources very difficult to use. Deep knowledge is required by human users and the scale and complexity of these data makes that task difficult. In addition, the computational use of this knowledge component is even more difficult, exacerbated by the overwhelmingly natural language representation of these knowledge facts.

In molecular biology, we are used to having nucleic acid and protein sequence data that are computationally amenable. There are good tools that inform a biologist when two sequences are similar. Any evolutionary inference based on that similarity is, however, based upon knowledge about the characterised sequence. Use of this knowledge has been dependent on humans, and reconciliation of all the differing labels and conceptualisations used in representing that knowledge is necessary. For example, in post-genomic biology, it is possible to compare the sequences of a genome and the proteins it encodes, but not to compare the functionality of those gene products.

There is, therefore, a need to have a common understanding of the categories of objects described in life sciences data and the labels used for those categories. In response to this need, biologists have begun to create ontologies that describe the biological world. The initial move came from computer scientists, who used ontologies to create knowledge bases that described a domain with high fidelity; an example is EcoCyc (http://view.ncbi.nlm.nih.gov/pubmed/8594595). Ontologies were also used in projects such as TAMBIS (DOI:10.1147/sj.402.0532) to describe molecular biology and bioinformatics, to reconcile diverse information sources and to allow the creation of rich queries over those resources. The explosion in activity came, however, in the post-genomic era with the advent of the Gene Ontology (GO) (DOI:10.1038/75556). The GO describes the major functional attributes of gene products: molecular function, biological process and cellular component. Now some forty-plus genomic resources use GO to describe these aspects of the gene products of their respective organisms. Similarly, the Sequence Ontology describes sequence features, and PATO (the Phenotype And Trait Ontology) describes the qualities necessary to describe an organism's phenotype. All these and more are part of the Open Biomedical Ontologies (OBO) project (DOI:10.1038/nbt1346).

Conclusion

In conclusion, we can say that there is a need to describe the entities existing within data generated by biologists so that they know what they are dealing with. This entails being able to define the categories of biological entities represented within those data. As well as describing the biological entities, we also need to describe the science by which they have been produced. This has become a large effort within the bioinformatics community. It has also been found to be a difficult task, and much effort can be spent in attempting to find the true nature of entities in biology and science. It should be remembered, however, that the goal of the bio-ontology effort is to allow biologists to use and analyse their data; building an ontology is not a goal in itself.

References

This text is adapted and updated from Ontologies in Biology by Robert Stevens. A numbered list of references will be generated from the DOIs above in later drafts of this article, after peer review.

Acknowledgements

This paper is an open access work distributed under the terms of the Creative Commons Attribution License 3.0, which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are attributed.

The paper and its publication environment form part of the work of the Ontogenesis Network, supported by EPSRC grant EP/E021352/1.

Semantic Integration in the Life Sciences
http://ontogenesis.knowledgeblog.org/126

There are a number of limitations in data integration: data sets are often noisy, incomplete, of varying levels of granularity and highly changeable. Every time one of the underlying databases changes, the integrated database needs to be updated, and if there are any format changes, the parsers that convert to the unified format need to be modified as well. This “database churn” was identified by Stein to be a major limiting factor in establishing a successful data warehouse (Stein 2003).

Ruttenberg et al. see the Semantic Web, of which both OWL and RDF are components, as having the potential to aid translational and systems biology research; indeed, any life science field where there are large amounts of data in distributed, disparate formats should benefit from Semantic Web technologies (Ruttenberg et al. 2007).

Semantic Integration

Integrated data sources, whether distributed or centralised, allow querying of multiple data sources in a single search. Traditional methods of data integration map at least two data models to a single, unified, model. Such methods tend to resolve syntactic differences between models, but do not address possible inconsistencies in the concepts defined in those models. Semantic integration resolves the syntactic heterogeneity present in multiple data models as well as the semantic heterogeneity among similar concepts across those data models. Often, ontologies or other semantic web tools such as RDF are used to perform the integration.

Addressing Semantic Heterogeneity

Semantic heterogeneity describes the difference in meaning of data among different data sources. A high level of semantic heterogeneity makes direct mapping difficult, often requiring further information to ensure a successful mapping. Such heterogeneity is not resolved in more traditional syntactic data integration methods. For instance, in data warehousing or data federation, multiple source schemas (e.g. database schemas) are converted to a single target schema. In data warehousing, the data stored in the source models is copied to the target, while in federated databases the data remains in the source models and is queried remotely via the target schema.

However, the schema reconciliation in non-semantic approaches tends to be hard-coded for the task at hand and is not easily reused for other projects. Often, data is aligned by linking structural units such as XSD components or table and row names. Further, concepts in the source and target schemas are often linked based on syntactic similarity, which does not necessarily account for possible differences in the meanings of those concepts. For instance, a protein in BioPAX is strictly defined as having only one polypeptide chain, while a protein in UniProtKB (The UniProt Consortium 2008) can consist of multiple chains. Semantic data integration is intended to resolve both syntactic and semantic heterogeneity, and can allow a richer description of the domain of interest than is possible with syntactic methods. By using ontologies, kinds of entities, including relations, can be integrated across domains based on their meaning. However, application of such techniques in bioinformatics is difficult, partly due to the bespoke nature of the majority of available tools.

The protein example can be further extended to illustrate the practical differences between traditional data integration and semantic integration. In traditional data integration methods, two database schemas may contain a “Protein” table, but if what the developers mean by “Protein” differs, there is little way of determining this difference programmatically. An integration project using these two schemas as data sources may erroneously mark them as equivalent tables. In semantic integration, if the two data sources had modelled Protein correctly, the differences in their meaning would be clear both programmatically and to a human looking at the axioms for Protein in the two data sources' ontologies. Once the semantic differences are identified, they can be resolved. One possibility would be for the person creating the integrated ontology and data set to create a Protein superclass that describes a Protein in a generic way; the two source definitions could then be modelled as children of that Protein superclass.
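A minimal sketch of that resolution, using Python and rdflib; the namespaces stand in for the real BioPAX and UniProtKB-derived ontologies and are invented for illustration.

```python
from rdflib import Graph, Namespace, RDFS

CORE = Namespace("http://example.org/core#")        # invented core ontology
BIOPAX = Namespace("http://example.org/biopax#")    # stands in for BioPAX
UNIPROT = Namespace("http://example.org/uniprot#")  # stands in for UniProtKB

g = Graph()

# Rather than asserting owl:equivalentClass between the two source
# classes (wrong, since their definitions differ), both are subsumed
# under a deliberately generic core Protein class.
g.add((BIOPAX.Protein, RDFS.subClassOf, CORE.Protein))
g.add((UNIPROT.Protein, RDFS.subClassOf, CORE.Protein))

# A query for subclasses of core:Protein now reaches both source
# classes without conflating their distinct meanings.
for cls in g.subjects(RDFS.subClassOf, CORE.Protein):
    print(cls)
```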

Ontology-based Integration

Integration methods based on ontologies can be more generic, reusable and independent of the integrative applications they were created for, when compared with traditional approaches which resolve only syntactic heterogeneity (Cheung et al. 2007). Mappings between schemas in non-semantic approaches are specific to those schemas, and cannot be applied to other data sources; conversely, mappings between ontologies (and therefore to the data sources that utilise those ontologies) can be used by any resource making use of those ontologies, not just the originally intended data sources. Two concepts may have different names, but if they reference the same ontology term, then it may be sensible to mark them as semantically equivalent. However, this method brings its own challenges, as described in the Ontogenesis article Ontologies for Sharing, Ontologies for Use:

“The alternative approach of defining equivalences between terms in different ontologies suffers from some of the same problems, since use of owl:EquivalentClass is logically strict. Strict equivalence is inappropriate if the definitions of the classes within the two ontologies differ significantly. […] An alternative is just to indicate that some sort of relationship exists between classes between two ontologies by use of skos:related (http://www.w3.org/TR/skos-primer/).”

Ontology mapping, also known as class rewriting, is a well-studied methodology that allows the mapping of a source class to a target class from a different ontology. As primitive classes are used in description logics (DL) to characterise defined classes (pg. 52, Baader et al. 2003), such rewriting also allows the linking of relationships (also known as properties) between the two ontologies. Mapping can be used to automatically generate queries over the data source ontologies via a core ontology, using views over the data source ontologies. Additionally, mapping can be applied more generally to rewrite the required features of data source ontologies as a function of a core ontology, as described by Rousset et al. for two existing data integration systems, PICSEL and Xyleme (Rousset et al. 2004).
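The contrast drawn in the quoted passage between strict equivalence and a looser link can be shown in a short rdflib sketch; the ontologies and class names here are invented.

```python
from rdflib import Graph, Namespace, OWL
from rdflib.namespace import SKOS

A = Namespace("http://example.org/ontoA#")  # invented source ontology
B = Namespace("http://example.org/ontoB#")  # invented target ontology
g = Graph()

# Strict: every instance of A:Gene is a B:Gene and vice versa.
# Only appropriate when the two definitions really do coincide.
g.add((A.Gene, OWL.equivalentClass, B.Gene))

# Loose: the classes are related in some unspecified way; safer
# when the definitions differ significantly.
g.add((A.Transcript, SKOS.related, B.RNAProduct))

print(g.serialize(format="turtle"))
```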

In the life sciences, the most common formats for ontologies are OWL and OBO. More complex semantic integration tasks can involve more than two ontologies, and often employ a mediator, or core, ontology used in concert with multiple source ontologies.

Mapping Strategies

Often, the data sources to be integrated cover very different domains, and one or even two ontologies are not sufficient to describe all of the sources under study. In such cases, there are a variety of methodologies for mapping more than two ontologies together. Most ontology integration techniques involving more than two ontologies can be classified according to two broad mapping strategies: global-as-view, where the core ontology is created as a view of the source ontologies, and local-as-view, where the reverse is true. Global-as-view mapping defines the core ontology as a function of the source ontologies rather than as a semantically rich description of the research domain in its own right, though the level of dependence of the core ontology can vary (Wache et al. 2001, Rousset et al. 2004, Gu et al. 2008). With local-as-view, the core ontology is independent of the source ontologies, and the source ontologies themselves are described as views of the core ontology.

Hybrid approaches (Lister et al. 2009, Xu et al. 2004) also generate mappings between source ontologies and the core ontology. However, unlike traditional approaches, the core ontology is completely independent of any of the source ontologies. Such approaches allow both the straightforward addition of new source ontologies as well as the maintenance of the core ontology as an independent entity.

Current Semantic Integration Efforts

RDF databases are generally accessed and queried via SPARQL. Life science RDF databases include Data Web projects such as OpenFlyData (Miles et al., submitted), Neurocommons (Ruttenberg et al. 2009), BioGateway (Antezana et al. 2009) and S3DB (Deus et al. 2008). Many others are listed in Table 1 of Antezana et al. (2009). Some databases only use RDF, while others also make use of OWL.

Databases such as RDF triple stores provide data sets in a syntactically similar way, but the semantic heterogeneity is not necessarily resolved. For instance, while Bio2RDF stores millions of RDF triples, queries must still trace a path against existing resources rather than have those resources linked via a shared ontology or ontologies (Belleau et al. 2008). Shared vocabularies (e.g. OBO Foundry ontologies) can be used to build connections between RDF data files, which would provide existing connections among data sets that could be leveraged by integration projects.
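As an illustration of such SPARQL access, the following sketch queries a small in-memory RDF graph with rdflib; the vocabulary and data are invented rather than taken from Bio2RDF or any real triple store.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/vocab#")  # invented vocabulary
g = Graph()
g.add((EX.brca1, RDF.type, EX.Gene))
g.add((EX.brca1, EX.symbol, Literal("BRCA1")))

# A SPARQL query run over the in-memory graph; against a real
# triple store the same query would be sent to a SPARQL endpoint.
results = g.query("""
    PREFIX ex: <http://example.org/vocab#>
    SELECT ?gene ?symbol WHERE {
        ?gene a ex:Gene ;
              ex:symbol ?symbol .
    }
""")
for gene, symbol in results:
    print(gene, symbol)
```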

Semantic integration projects can make use of expressive logic-based ontologies to aid integration. Work on ontology mapping and other semantic data integration methodologies in the life sciences includes the RDF approaches mentioned above as well as the TAMBIS ontology-based query system (Stevens et al. 2000); mapping the Gene Ontology to UMLS (Lomax et al. 2004); the integration of Entrez Gene/HomoloGene with BioPAX via the EKoM (Sahoo et al. 2008); the database integration system OntoFusion (Alonso-Calvo et al. 2007); the SWRL mappings used in rule-based mediation to annotate systems biology models (Lister et al. 2009); and the pharmacogenomics of depression project (Dumontier and Villanueva-Rosales, 2009).

Even with improved methods in data integration, problems of data churn remain. Some projects, such as that by Zhao et al., have proposed the use of Named Graphs to track provenance and churn of bioinformatics data, such as gene name changes (Zhao et al. 2009). Ultimately, it is not just the syntax and semantics of the data sources which must be resolved, but also the challenges associated with ensuring that data is up to date, complete and correctly traced and labelled.

Acknowledgements

This paper is an open access work distributed under the terms of the Creative Commons Attribution License 3.0 (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are attributed.

The paper and its publication environment form part of the work of the Ontogenesis Network, supported by EPSRC grant EP/E021352/1.

Ontological Annotation of Data
http://ontogenesis.knowledgeblog.org/50

Introduction

This is a blogged article on how scientific data, specifically biomedical data, are annotated with ontologies. It introduces the kinds of data that are annotated, the people who perform annotation, the ontologies used for annotation and resources such as databases which make that annotation available to users. It is not intended to be a comprehensive guide to all ontologies or databases which use ontologies.

Author Profile

Helen Parkinson is a geneticist who was seduced to the dark side (bioinformatics) 10 years ago. She manages and annotates high-throughput functional genomics data for the ArrayExpress database and Atlas of Gene Expression hosted at the European Bioinformatics Institute. She also builds ontologies such as EFO and OBI to annotate these data.


1. What does ‘Ontological Annotation of Data’ mean?

Good question; let's start with what we mean by data in this context. There are articles describing 58 new databases and 73 updated databases in the 2010 NAR database issue. These databases are necessary because the scientific technology we use in biomedicine now produces huge amounts of data. For example, a PhD student in 1990 might routinely sequence 1 kilobase of DNA using 35S sequencing technology (subject to possessing the technical skills to do the experiment); such volumes of data can be stored easily in a FASTA format file. The same PhD student in 2010 could sequence several human genomes (subject to funding, access to a sequencing facility that will perform the experiment, and ethical approval).

This presents a data and knowledge management problem. The raw data generated by the sequencer can be stored in the same file formats as were used in 1990; however, the information about the genes present in the genome, their position, their function and whether they are expressed in the individual being assayed is usually stored in a database. When we consider the phenotype of the human from which samples were taken, the purpose of the study and the results generated by the study, there are two axes of annotation to consider: annotation relating to what is being assayed (the genetic content of the individual, where the genes are and what they may do), and metadata about the individual (age, sex, physical characteristics, any diseases they may have) and about what was actually sampled, e.g. diseased or normal tissue, or peripheral blood.

2. Who does the annotation?

In our example, the PhD student may have done the annotation of the 1 kilobase of DNA in 1990, and PhDs were awarded for finding genes, sequencing parts of the genome and functional analysis. In 2010 the functions of many genes are known, and this information is reported in the scientific literature as free text. Free text can be searched effectively, but the information on gene function is more useful when it is organised and the knowledge linked to the gene information. The most commonly used ontology in biomedicine is the Gene Ontology, or GO, which has the “aim of standardizing the representation of gene and gene product attributes across species and databases”. The Gene Ontology is built by a team of specialist bioinformaticians who structure the ontology, provide definitions and generally ensure that it is fit for purpose. GO is used by curators of model organism databases like ZFIN or domain-specific databases like UniProt to annotate genes.

3. Why do they do it?

GO is used to describe gene products in a formal and structured way. As gene products have common functions across species, proteins from more than 20 species are annotated to the GO term ‘transcription factor binding’ in UniProt (a database of proteins). Transcription factor binding is a high-level term; it has nine direct child terms, each of which also has child terms linked by is-a relationships. The structure of the GO hierarchy allows subsumption queries which traverse these relationships, representing more or less specific biological knowledge as the hierarchy is traversed. The GO enforces an ‘all paths to root must be true’ rule, so the terms and their relationships represent a statement of biological truth based on available knowledge. E.g.

transcription factor binding is-a protein binding is-a binding is-a molecular function

More or less specific annotation can be made by a curator selecting a term from lower or higher up the hierarchy. Annotations are made to GO based on scientific literature, automated analyses based on sequence homology, and assertions made by expert curators. Annotations change over time on the basis of emerging biological knowledge, and the content of the GO also changes as terms are added or removed; annotations are therefore updated periodically.
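A subsumption query over the is-a chain shown above can be sketched in a few lines of Python; the hierarchy is hard-coded for illustration, whereas a real query would run over the GO itself.

```python
# Each term maps to its direct is-a parent, mirroring the chain above.
GO_IS_A = {
    "transcription factor binding": "protein binding",
    "protein binding": "binding",
    "binding": "molecular function",
}

def ancestors(term: str) -> list[str]:
    """All terms reachable by following is-a links to the root."""
    path = []
    while term in GO_IS_A:
        term = GO_IS_A[term]
        path.append(term)
    return path

# A protein annotated to the specific term is implicitly annotated
# to every ancestor ('all paths to root must be true').
print(ancestors("transcription factor binding"))
# ['protein binding', 'binding', 'molecular function']
```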

4. Sample annotation vs. gene annotation

In the example above we considered gene-specific annotation and explored the use of the GO in the context of protein databases. Now let us suppose our PhD student has several human cell lines and is sequencing these to compare differences in the expression of genes in these samples. We saw that GO provides annotation on process, function and cellular component, so what sort of annotation about these cell lines is important, and why?

Cell lines can be immortalized; in this case they are derived from diseased tissue in a human and are used as a model system for investigating the disease process. Cell lines are commercially available from centres such as the ATCC, which provide rich information about the cell type, growth conditions and disease of the donor. This information is expressed as free text in the ATCC, and some of this text has been structured into an application ontology called EFO. This allows us to identify all cell lines which are derived from cancer samples, if the EFO terms are mapped into available data sets. The relationships between concepts relating to cell lines are shown below, represented in Manchester syntax.

‘cell line’ and derives_from some ‘organism part’

‘cell line’ and bearer_of some ‘disease state’

‘cell line’ and derives_from only ‘species’

‘cell line’ and derives_from some ‘cell type’

Once we have this information for our cell lines of interest, and these are mapped into an appropriate dataset, we can combine this information with the gene annotation using GO and expression data, and perform complex queries. For example: which human genes annotated as having the GO process ‘cell adhesion’ are over-expressed in cell lines derived from cancer cells?
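As a sketch of what such a combined query might look like once both annotations are mapped into one RDF dataset, the following rdflib example encodes the question in SPARQL; all URIs and property names are invented, not the actual Atlas or GOA schemes.

```python
from rdflib import Graph, Namespace

# Invented schema: one tiny dataset mixing GO-style process annotation
# with EFO-style sample annotation.
EX = Namespace("http://example.org/atlas#")
g = Graph()
g.add((EX.gene1, EX.hasGOProcess, EX.cell_adhesion))
g.add((EX.assay1, EX.overExpresses, EX.gene1))
g.add((EX.assay1, EX.usesCellLine, EX.line1))
g.add((EX.line1, EX.derivesFrom, EX.cancer_cell))

# Which genes annotated to the GO process 'cell adhesion' are
# over-expressed in cell lines derived from cancer cells?
q = """
PREFIX ex: <http://example.org/atlas#>
SELECT ?gene WHERE {
    ?gene  ex:hasGOProcess  ex:cell_adhesion .
    ?assay ex:overExpresses ?gene ;
           ex:usesCellLine  ?line .
    ?line  ex:derivesFrom   ex:cancer_cell .
}
"""
for row in g.query(q):
    print(row.gene)  # http://example.org/atlas#gene1
```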

This type of query thus requires multiple ontologies mapped into two different datasets, and a GUI to visualize the result of the query or some programmatic access. In this example the annotations were mapped to sample data (provided by biologists like our PhD student) by the ArrayExpress curators, the gene annotations were provided by the GOA curators at the EBI, and the data are visualized by the Atlas of Gene Expression at the EBI. Ontologies can therefore be made directly interoperable via application ontologies or via data.

5. Tools for applying ontologies to data

We have already discussed two user groups: specialist curators, who build ontologies and annotate to GO and an application ontology, and our PhD student, who is annotating their own data and consuming existing GO annotations. This suggests we need different types of tools for these two types of users, who have different skill sets.

Where can I get GO annotations?

GO annotations are available from many different resources; a complete list of tools that search the GO is maintained by the Gene Ontology Consortium, and many of these tools also provide links to proteins annotated to GO terms.

What tools can I use to annotate my samples?

In our example of samples annotated with cell lines, the annotation is made in the context of a submission to a database, and is performed by curators who use lexical matching tools combined with manual curation. There are also data submission and annotation tools such as Annotare.

How can I search ontologies?

The BioPortal and Ontology Lookup Service (OLS) search tools provide access to multiple ontologies, which can be searched singly or in combination for common concepts such as ‘fibroblast’.
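Programmatic search is also possible. The sketch below queries the OLS REST search endpoint with the requests library; the URL and response fields follow the OLS API as commonly documented, but both should be treated as assumptions to verify against the current OLS documentation.

```python
import requests

# Search OLS for 'fibroblast' across all ontologies it indexes.
# Endpoint and response structure are assumptions based on the
# published OLS REST API; check the current documentation.
resp = requests.get(
    "https://www.ebi.ac.uk/ols4/api/search",
    params={"q": "fibroblast"},
    timeout=30,
)
resp.raise_for_status()
for doc in resp.json()["response"]["docs"][:5]:
    print(doc.get("obo_id"), doc.get("label"), doc.get("ontology_name"))
```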

Can annotation be automated?

Human curators are expensive, highly skilled individuals, and the volume of data is growing beyond the ability of existing curators to annotate it. There are a number of attempts to automate annotation using text mining tools such as Textpresso and Whatizit, and curator support tools also use this technology.

6. Conclusion

Data is annotated with ontologies by both biologists and specialist curators who both use and create ontologies for this purpose. Annotation is made available by databases which offer GUIs for searching and programmatic access via APIs. Some data is automatically annotated using text mining tools.

Acknowledgements

This paper is an open access work distributed under the terms of the Creative Commons Attribution License 2.5 (http://creativecommons.org/licenses/by/2.5/), which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are attributed.

The paper and its publication environment form part of the work of the Ontogenesis Network, supported by EPSRC grant EP/E021352/1.
