Tag Archives: bioinformatics

Is "how to do bioinformatics" the major topic in bioinformaticians online reading habits?

Sifting through my website stats, I realised that bioinformaticians read more posts discussing “how to do bioinformatics” than posts with strictly scientific content. Is this a feature of this blog, or does it reflect a common problem with working habits?

Drawing some conclusions after two years of atcgeek

After almost two years of blogging on atcgeek, I dare say a thing or two about this experience. Scrolling through the statistics of this blog, I cannot really complain about the interest it has generated among readers. Even though I won’t become famous by writing here, 38k views since January 2014, with peaks of around 1k views/day, is not a bad result, considering the long pause I had to take while moving to Barcelona and starting my PhD. Nothing really special, but not a disastrous failure either.

The three main topics at atcgeek

Although I divide my posts into thematic categories (bioinformatics, biochemistry, structural biology, etc.) and into types of article (news, insights, video, hacks and personal blog), I realised that I basically tend to write on three topics: education and work practices, methods, and reflections. The posts of the first kind are about “how to work in bioinformatics” or “where to learn the basics”. The second kind are those in which I report on newly published methods, and the third category collects the posts proposing scientific insights on the role and nature of computational and theoretical biology.

Most of the interest goes to posts about education and work habits.

The order in which I mentioned these three topics coincides with their ranking in terms of interest generated. Education and work practices come first, methods come second, and the bronze medal goes to the insights. Swiftly and boldly comparing my site statistics with the interest generated on social networks, I dare say that the people who read atcgeek are particularly interested in discussing how to improve their working habits, how to start working in bioinformatics, or in sharing a bit of self-irony with me as I talk about the shit I do when I work. Take it as an impression that is barely supported by statistics, but plausible enough to raise a question.

Based on what I see on atcgeek, people are more interested in discussing how to do bioinformatics, or how to learn the basics, than in bioinformatics itself, and there could be some reasons behind this.

Of course, we should keep in mind that this blog is written by a PhD student who is sharing his experience while taking his first steps in computational biology. This matters, since anyone would be more interested in the opinions of someone more influential than me when it comes to the “scientific part”. The main goal of this blog is to share my experience horizontally and to interact productively with my visitors, rather than to claim to be an expert in the field and to “coach” the readers. On the other hand, if experience matters, it should matter for both topics, since the thoughts of an experienced scientist are worth more than mine on work habits as well as on science.

Do we have a problem with how to do our work?

Although the shift I am seeing in the readers’ interest may be due to the characteristics of this blog, I still have the feeling that “how to work” is the major hot topic in the bioinformatics community, and we may strongly suspect that this reflects a problem. Bioinformatics is basically the domain of non-computer scientists working with computers, the merger of two super-rapidly changing sciences, and the development of proven, shared and consolidated work strategies is far from being a reality, especially compared with experimental biology, where lab practices are widely discussed and protocols are consolidated.

There is one last thing to say. In the real ranking of the most visited posts, the most read one is not really about bioinformatics. Let’s say that this discussion is focused on what bioinformaticians read online when they are keen to read about science; including the other interests could be puzzling.

BTW, thank you for the interest in this stupid diary.

The four most stupid things I have ever done in bioinformatics.

It was a cold November morning in 2011. Sapienza University has a huge campus next to the city centre of Rome, where the main faculties are housed in huge rationalist-style buildings. Yet the faculty of Biochemistry has a detached site in San Lorenzo, the neighbourhood flanking the campus. I was crossing the streets of this wonderful ex-industrial, alternative neighbourhood to reach my new lab. The clock read 10:30 AM, and I was joining bioinformatics. Professor Stefano Pascarella had accepted to supervise my master thesis, and it was my very first day. Four years have passed since then: I have graduated and worked in five different labs, and even if my experience is not really long, I think I already have a couple of stories to tell.

Stupidity matters. Although most people link science to intelligence and genius, seeing research as a matter for the “smart guys”, we must admit that lab routine is often studded with the crap we make, and that researchers can become protagonists of acts of remarkable stupidity. And if we scan the first, faltering steps of a researcher’s career, we may find a couple of funny, nerdish stories to tell colleagues at a bar. And since I would be so sorry to learn that some of you might run out of funny anecdotes about grad students’ stupidity, let me report the four most stupid things I have ever done in bioinformatics.

Trying to fetch information from UniProt on 1750 genes without any programming

The first task of my master thesis was simple. My advisor provided me with a list of 250 UniProt IDs of MocR proteins from several bacterial genomes: helix-turn-helix transcription factors with an aminotransferase domain that regulates them allosterically upon pyridoxal-5’-phosphate binding. The lab had identified these sequences with HMMer, and we wanted to know something more about the flanking regions. The professor told me to annotate 3 upstream and 3 downstream coding regions in order to see whether some recurrences could indicate a conserved multigenic region; simple and straightforward.

The next day I was shattered, staring blankly at my screen at 8 PM, after ten hours of work. A hard lesson I have learned since then is that if you got something wrong when designing your bioinformatics workflow, a spreadsheet will show up at some point. I was staring at an OpenOffice Calc window with about 40 rows, having managed to find a way to manually scan the flanking regions. I don’t remember my glorious strategy exactly, but it must have gone something like this:

  1. Copy and paste the ID into UniProt and search for it.
  2. Scroll all the way down to the cross-link pointing at a graphical genome browser and open it.
  3. Perfect, you are on the spot! Now move the browser forwards and backwards, and you will find the flanking sequences.
  4. Select each flanking gene in the interval and make your way back to UniProt.
  5. Save the information you get (basically the UniProt ID) in a spreadsheet and go on.

I was then advised to stop doing this and to get on with studying Python. That was the day I learned that there is no bioinformatics without programming.
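Today, the same job would be a few lines of code. Here is a minimal sketch of the idea in Python 3, assuming the current UniProt REST endpoint (https://rest.uniprot.org/uniprotkb/<accession>.txt) and two placeholder accessions; the details back then would have differed, but the point stands: let the script do the copy-pasting.

#!/usr/bin/env python3
# Minimal sketch: fetch UniProt flat-text entries for a list of accessions
# instead of copy-pasting each one into the website by hand.
# The URL below assumes the current UniProt REST API; adapt it if it changes.
import urllib.request

accessions = ["ACCESSION1", "ACCESSION2"]  # replace with real UniProt accessions

for acc in accessions:
    url = "https://rest.uniprot.org/uniprotkb/%s.txt" % acc
    with urllib.request.urlopen(url) as handle:
        entry = handle.read().decode("utf-8")
    # keep only the gene-name and organism lines as a quick overview
    for line in entry.splitlines():
        if line.startswith(("GN ", "OS ")):
            print(acc, line)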

Protein-DNA docking to fetch promoters.

After the first explorations, the final goal of my M.Sc. thesis work became the identification of a conserved promoter region upstream of the neighbouring genes pdxS and pdxT, which code for the two subunits of the pyridoxal-phosphate synthase holoenzyme in bacteria. This memory tastes a bit sweet, as usual when you end up remembering how naive you were as a newbie. It was early 2012, January or maybe February. During a lab meeting, I argued that a good option for finding our promoters was to perform a docking analysis on a set of candidate promoter sequences, docked with the MocR transcription factor that had been found to activate their transcription. After explaining my point, I realised that everyone was just looking at me with dismay. Do you know that awful feeling when everyone in the room looks at you like you’re crazy? It was explained to me that the methods developed for protein-DNA docking were still too ineffective to yield a reliable result. Protein-DNA docking to infer the binding region of an HTH? Pure science fiction. At least, that day I was introduced to one of my favourite topics in bioinformatics: the communication between DNA and proteins.

Declaring profanities as variables in your code.

Even if I am quite used to threading jokes through my code, taking it as a “nerdish rebellion” against my even more nerdish work routine, what I am going to tell here did not actually happen to me. I include this story, which I heard second-hand, because it is really worth reading.

When working in a team, sharing code is fundamental, and the best habit you can adopt is to give your variables human-readable names and to write proper comments, so that the people who will read your code can understand it (to any possible extent). Anyway, the first thing you should care about before sharing your code is making sure that it will not worsen the opinion your colleagues have of you.

This story has all the ingredients a good academic joke needs to succeed: a polite, old-fashioned thesis director, a graduate student with a sense of humour his advisor won’t get, swear words, profanities, and a Perl script to put them on display.

Stefano Pascarella is not old at all, but he is still the kind of super-mannered, polite Italian professor. I worked in his lab for two years and never heard him yell at anyone, or even express disappointment harshly. Quite remarkable, since he was my thesis advisor. I never met the student who is the protagonist of this story, though, and I can only picture him as the typical twenty-something master student. The only thing I am pretty sure of is that one day he was not at the lab, and his code was needed for some reason.

Professor Pascarella sat down in front of the terminal and rapidly found the file he needed. The people who told me this story just cannot forget the expression on the professor’s face. A calm, bored expression immediately turned serious, then swiftly faded into disconcert. Every variable in the code he was reading was either a swear word or a profanity.

Later that day, the student received an email “kindly asking” him “to take his coding routine more seriously”.

Ignoring the find/replace function in a text editor.

Ok, I can imagine what you are thinking: “This moron didn’t know that text editors have a find/replace function and corrected a whole code base manually to change a single word”. Not quite: I did something that is possibly worse. When I started writing code, I actually did not know much about the existence of this amazing function in my text editor, but I was still very sure that the process had to be automated. My ignorance of text editors mixed dramatically with my inclination for programming to give rise to one of the most stupid things I have ever done.

When I finished and tested the script, named changeword.py, I was totally sure that it was one of the best things I could produce with my short programming experience. I don’t really remember the code, but it must have looked something like this:

#!/usr/bin/python
# replace every occurrence of a word in a file and print the result
import sys
filein = sys.argv[1]           # input file
word_to_change = sys.argv[2]   # word to replace
replacement = sys.argv[3]      # replacement word
a = open(filein, 'rU')
b = a.read()
a.close()
print b.replace(word_to_change, replacement)

To run it, you just needed to input the file, the word you wanted to change and its replacement, and everything went to standard output:

$> ./changeword.py my_file.txt first_word second_word > my_corrected_file.txt

Et voilà, the text came out changed. Luckily, at a certain point I realised that my fantastic script did not cover every change I could need, and decided to discuss the problem with a postdoc in my lab. He is still laughing about it.
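For the record, besides the editor's own find/replace, the same job had been a one-liner at the command line all along; something like this (a sketch, not the exact command anyone suggested to me) would have done it without writing any script:

$> sed 's/first_word/second_word/g' my_file.txt > my_corrected_file.txt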

Writing the MD5 checksum on the same file from which I extracted it.

Fatigue plays tricks, and it makes a perfect source of inspiration for stupid actions. When you are tired you can experience severe logical failures, and brilliantly shatter your work in seconds.

This happened a few months ago. Tracking your input, output and script files is very important, and even if we are not used to version control systems, annotating every file with its MD5 code may help, to some extent, in keeping better track of your work.

The MD5 algorithm assigns a practically unique code to a given input: if you feed a file to MD5, the output code identifies that file unambiguously. Of course, if you modify the file, the resulting MD5 code changes.

I was finishing a long scripting session and was adding information to my tab-separated output file in a commented header. As I calculated the MD5 code, I had the brilliant idea of writing it into the same file from which I had extracted it. Needless to say, after I pasted the MD5 code into the file, the MD5 code of that new file inexorably changed.
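If you want to see the trap for yourself, here is a minimal sketch in Python using hashlib; the file name is made up, but the logic is exactly the mistake I made:

import hashlib

def md5_of(path):
    # hash the raw bytes of a file and return the hex digest
    with open(path, "rb") as handle:
        return hashlib.md5(handle.read()).hexdigest()

checksum = md5_of("results.tsv")            # checksum of the finished output file
with open("results.tsv", "a") as handle:    # ...pasted into the file itself
    handle.write("# md5: %s\n" % checksum)
print(checksum == md5_of("results.tsv"))    # the file no longer matches: prints False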

It took me a good quarter of an hour to realise it. It was 9 PM, and I thought it was just my brain asking me to go home and get some rest.

As I said at the beginning of this article, stupidity matters. And being able to laugh at yourself matters even more. Cognitive work requires the application of all your rationality, and it is thus fundamental to understand its limits, that is, the borders of your intellectual skills, which are shaped by stupidity. I think there is no shame in recognising your own limits, and publicly admitting them is somewhat therapeutic.

Quoting an Italian PhD student I met at my department, who recently graduated, “there is no use for a PhD course except in the light of understanding how stupid you are”. I have recently registered for my second year of PhD here at the CRAG, and I still have a long way to go in exploring the deepest corners of my stupidity.

After all, the Diesel advertisement shown as the heading image of this post may be right. You are stupid only if you try to explore your limits. And that is exactly what I am up to.

Happy birthday, Mr. GNU

It was the early eighties, a day like any other at MIT. And a printer was not working. Richard Stallman, a programmer at the Artificial Intelligence Laboratory, did his best to obtain the source code of the driver from the manufacturer in order to fix it, but there was no way. The code was closed, and this was definitely a huge problem. Because if we give up sharing our work, we cease to work for the common good. And this should never happen in science.

All of a sudden, something as simple as the possibility of modifying a driver became the symbol of an epic struggle: the struggle between greed and generosity, individualism and solidarity, profit and redistribution, patents and free knowledge and, to some extent and in a more philosophical fashion, between capitalism and anti-capitalism.

It was September 27th, 1983, and Richard Stallman announced his challenge to the world: ensure that source code flows freely. The GNU project was born.

Over the years, a huge crowd of programmers of every kind joined the movement, raising the flag of free knowledge as a means of redistributing wealth and spreading democracy. A lot of admirable and romantic ideals that shocked the world as they proved effective enough to beat the bad guys of the software industry. Despite the efforts of the software majors to promote their closed, patent-based approach to software, the free software movement has been the one dictating the metrics and tracing the groove of many aspects of the evolution of the IT market: the encounter with Torvalds’ Linux kernel, the birth of the main distribution projects, the extension of free software principles to all aspects of cognitive production, which led Lawrence Lessig to found Creative Commons in 2001. Year by year, open source software has spread, becoming the standard behind almost everything that runs the internet nowadays, including Google and Facebook.

A lesson that we still need. Openness is fair, and it is productive. As the debate on Open Science spreads, the example of Free Software still traces a path we must follow.

Tune up your pipeline with Luigi, the Python workflow-management module used at Spotify.

Yesterday I found an amazing audio comment on Nature’s Arts and Books blog discussing a possible influence of music on the development of modern science. Among the many connections we may find between science and music, the one I am going to propose today is quite unexpected.

I understand that pipeline development is taking over the discussion in this blog, and this could actually get quite boring. That is because I am facing my very first big project in genomics, and I need to explore the best solutions and strategies for managing complex workflows. So, since I have already discussed some Python solutions for pipelines and the NextFlow DSL project, let me take a few lines to talk about Luigi.

Luigi is a Python package for building complex pipelines of batch jobs. Long batch processes can be managed easily, and there are pre-designed templates: for instance, there is full support for MapReduce development in Python, which is the only language used in Luigi. The whole workflow can be monitored through a very useful graphical interface, which provides a graph representing the structure of your pipeline and the status of data processing. More information and downloads are available on the Luigi GitHub page.
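To give an idea of what a Luigi workflow looks like, here is a minimal sketch of two chained tasks; the task and file names are made up, but the structure (a Task class declaring requires, output and run) is the heart of the package:

import luigi

class CountLines(luigi.Task):
    # count the lines of a (hypothetical) input file
    def output(self):
        return luigi.LocalTarget("line_count.txt")
    def run(self):
        with open("input.txt") as fin, self.output().open("w") as fout:
            fout.write(str(sum(1 for _ in fin)))

class Report(luigi.Task):
    # write a small report; depends on CountLines
    def requires(self):
        return CountLines()
    def output(self):
        return luigi.LocalTarget("report.txt")
    def run(self):
        with self.input().open() as fin, self.output().open("w") as fout:
            fout.write("lines: %s\n" % fin.read())

if __name__ == "__main__":
    luigi.run()   # e.g. python pipeline.py Report --local-scheduler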

How is this related to music? Well, the picture above displays a romantic view of what music used to be. Nowadays everything is managed as a big data problem, and tunes and chords are turned into bits with a cruel disregard for any romance. Luigi was developed at the very famous (and my favourite) music application, Spotify. Its main developer, Erik Bernhardsson, is a NYC-based computer scientist who headed the machine learning division at Spotify for six years.

So, we can actually agree with Kerri Smith’s point on Nature: music influences scientific production. Sometimes it is a matter of cultural environment, sometimes it is a matter of data science.

Post-publication addition: I was informed on Twitter about this page with examples of Luigi usage, which I think is worth mentioning. Thanks to @smllmp.

Project Rosalind: learn bioinformatics and programming through problem solving

We could agree that a bioinformatician is basically a naked, starving castaway trying to survive on a desert island. As in one of those reality shows that run on TV, or in the movie starring Tom Hanks, he is provided with a knife, very few clothes, and a good dose of motivation. In this allegory, the island is computational research in the life sciences, the knife represents programming and mathematical skills, and the few clothes are the biological knowledge. Like a castaway, the main occupation of the computational biologist is to solve problems, doing his or her best to build new tools, explore the environment, fetch food (or a fair amount of coffee), and grow his or her knowledge.

Many educational programmes in bioinformatics, both at academic and open-course level, are oriented towards providing the basics of computational work, programming skills, a minimum of biological knowledge, and statistics. In our story, this means that most of the programmes you will come across will just provide you with the knife and a couple of tattered clothes.

This is the reason why I was really amazed when I discovered Rosalind, a website proposing a bioinformatics training system oriented towards problem solving. The training is organised as a game. You subscribe with your email, and you are offered bioinformatics problems at different levels of complexity. Problems are divided into several topics, and each problem gives you points when solved, with no penalty for failure. Remarkably, and against any expectation, this does not look like a website for students only. The diversity of the problems proposed and the range of fields involved are really high, and even experienced bioinformaticians may find the website useful for learning new things. Moreover, lecturers can apply for a professor account and use Rosalind to generate exercises for their classes.

The project is carried out by a Russian-American collaboration between the University of California at San Diego and Saint Petersburg Academic University, along with the Russian Academy of Sciences. It is inspired by a handful of e-learning projects that aim to provide problem-solving platforms on the web, such as Project Euler and Google Code Jam.

Luckily, computational research in biology is only partially represented by the castaway allegory. Indeed, when you do bioinformatics you are not on a remote island, as you can enjoy communication with other scientists and the (more or less) free learning resources available on the web. And even if you may sometimes feel alone on your island, wearing dirty, torn clothes and holding a blunt knife, you can still lean on some comfort and help. In this light, we may think of projects like Rosalind as a nice volleyball friend keeping you company during the darkest nights.

MethylMix: an R package for identifying DNA methylation-driven genes

The paper I am going to explore today introduces MethylMix, an R package designed to identify DNA methylation-driven genes. DNA methylation is one of the most extensively studied processes in biomedicine, since it has been found to be a principal mechanism of gene regulation in many diseases. Although high-throughput methods are able to produce huge amounts of DNA methylation measurements, there are only a few tools to formally identify hypo- and hypermethylated genes.

This is the reason why Olivier Gevaert from Stanford proposed MethylMix, an algorithm to identify disease-specific hyper- and hypomethylated genes, published online yesterday in Oxford Bioinformatics.

The key idea of this work is that it is not possible to rely on an arbitrary threshold to determine the differential methylation of a gene: the assessment of differential methylation has to be made in comparison with normal tissue. Moreover, the identification of differentially methylated genes must come along with a transcriptionally predictive effect, thus implying a functional relevance of methylation.

MethylMix first calculates a set of possible methylation states for each CpG site found to be associated with genes showing differential expression. This set is created by comparison with clinical samples and using the Bayesian Information Criterion (BIC). Then, a normal methylation state is defined as the mean DNA methylation level in normal tissue samples. Each state is compared with the normal methylation state in order to calculate the Differential Methylation value, or DM-value, defined as the difference between the methylation state and the mean DNA methylation in control samples. The output is thus an indication of which genes are both differentially methylated and differentially expressed.

As mentioned, the algorithm is implemented as an R package, which is already available through Bioconductor.

Software for pipeline creation with Python.

We have already had the chance to discuss the importance of reproducibility in computational research, and to comment on some good practices to improve it. Reading the Ten Simple Rules that Sandve and co-workers proposed last October, we cannot help but underline the importance of pipelines. A proper pipeline-based approach prevents researchers from making potentially harmful manual interventions on the data, and helps them keep a correct record of their workflow. Pipelines are just perfect for dealing with the typically huge bioinformatics tasks, which require a large amount of computation and several sorting and filtering steps. Despite the usual controversies, we can say that Python is becoming the first choice of many bioinformaticians, because of its powerful features, its dynamic and populous community, and its ease of use. That is why I think it is fair to discuss a couple of Python-based pipeline creation tools.

The first point is to take stock of the main features we should ask of a Python pipeline creation tool. Of course, anyone will appreciate a lightweight system for obvious reasons, and things like a simple syntax, scalability and the ability to manage complex workflows with ease will be very welcome. Another aspect I want to mention is the possibility of including previously created code in a pipeline system. There are two main reasons for this. First, functions and classes may be re-used in different projects, and having a pipeline system working as a “wrapper” around your code may ease this. Second, many Python beginners are not really oriented towards the pipeline philosophy, as Python works great with a module-based approach (even if not exclusively).

The different solutions I have found around can be distinguished on the basis of their relationship with the code, and according to how they weave into a Python script. Let’s assume, for simplicity, that a program is made up of functions that are included in modules, and that several modules can constitute the whole thing. We have thus identified three concentric levels: a function level, a module level, and a multi-file level. Pipeline systems for Python are basically modules that let you include simple-syntax statements in your scripts to manage the data flow. Several commands, usually implemented as decorators, are thus formalised to sort the data flow into an organised pipeline. So, we will discuss how different solutions work at the different levels.

Pipelines working at function level: Ruffus and Joblib

Published in 2010 in BMC Bioinformatics by Leo Goodstadt at Oxford University, Ruffus is available on its official website, where you can find complete documentation and tutorials. As you can see, Ruffus works by connecting consecutive input/output files, and requires the developer to write the functions in the code following the order of the data flow. Each function must be preceded by the @follows decorator, indicating the flow direction, and by @files, which declares the input/output files. That is why I describe it as a pipeline system working at a “function level”: the internal module structure of a script depends on the structure of the pipeline.
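As a taste of this style, here is a minimal sketch in the classic Ruffus decorator syntax with @files and @follows; the file names and functions are made up, and a real pipeline would of course do more than filter and count lines:

from ruffus import follows, files, pipeline_run

@files("reads.txt", "filtered.txt")
def filter_reads(input_file, output_file):
    # drop comment lines from the (hypothetical) input file
    with open(input_file) as fin, open(output_file, "w") as fout:
        fout.writelines(line for line in fin if not line.startswith("#"))

@follows(filter_reads)                 # flow direction: run after filter_reads
@files("filtered.txt", "counts.txt")
def count_reads(input_file, output_file):
    with open(input_file) as fin, open(output_file, "w") as fout:
        fout.write("%d\n" % sum(1 for _ in fin))

pipeline_run([count_reads])            # only rebuilds what is out of date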

This approach is somewhat related to the one implemented in Joblib, a Python pipeline system that is mostly oriented towards easing parallel computation. Despite the substantial differences, in both cases the structure of the script depends on the structure of the pipeline.
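For comparison, here is a minimal Joblib sketch: the pipeline flavour comes from attaching caching and parallelism directly to your functions (the sequences and the cache directory are made up):

from joblib import Memory, Parallel, delayed

memory = Memory("joblib_cache", verbose=0)   # results are memoised on disk

@memory.cache
def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / float(len(seq))

sequences = ["ATGCGC", "TTATAA", "GGGCCC"]
# run the same function over many inputs, two workers in parallel
results = Parallel(n_jobs=2)(delayed(gc_content)(s) for s in sequences)
print(results)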

Pipelines working at module level: Leaf

Leaf is a project published a couple of months ago by Francesco Napolitano at the University of Salerno in Italy. The key idea is to provide a system to declare a pipeline structure without changing the code. At the beginning of the module, it is possible to include a decorator building a graphical scheme of the pipeline you have in mind. A simple visual language, the Leaf Graphical Language, is used to build the dependencies graphically, with the possibility of exporting the whole workflow as hypertext to share results. Leaf comes as a Python library and can be downloaded here.

The key differences between Ruffus and Leaf are shown in the following picture (Napolitano et al., 2013).

(Figure from the Leaf paper, comparing the Ruffus and Leaf approaches.)

As evident, Leaf works as a real wrapper, whereas Ruffus requires a specific script structure.

Pipelines working at multi-file level.

Pipelines can be designed to interconnect different Python modules. In this case, the pipeline tool works at an “upper level”, standing above the different modules. This is the philosophy underlying the most common pipeline creation software, and I would like to mention Bpipe, which is one of the most recently developed (but there are quite a lot around). Of course, since scripts in any language can work with standard streams, we are slipping a bit away from the range of “Python-dedicated pipeline tools”, and learning good old GNU make is still worthwhile if you are keen on working (or in need of working) with pipelines at this level.

I cannot really tell which one is the best, since the choice will depend on the project, the coder’s attitude and the specific needs. Furthermore, this post is just rattling off a few projects I happened to find around, and more suggestions are very welcome.