Analyzing big datasets of genomic sequences: Fast and scalable collection of k-mer statistics

Raffaele Giancarlo, Simona Ester Rombo, Mara Sorella, Umberto Ferraro Petrillo, Giuseppe Cattaneo

Research output: Article

2 Citations (Scopus)

Abstract

Background: Distributed approaches based on the MapReduce programming paradigm have started to be proposed in the Bioinformatics domain, due to the large amount of data produced by next-generation sequencing techniques. However, the use of MapReduce and related Big Data technologies and frameworks (e.g., Apache Hadoop and Spark) does not necessarily produce satisfactory results, in terms of both efficiency and effectiveness. We discuss how the development of distributed and Big Data management technologies has affected the analysis of large datasets of biological sequences. Moreover, we show how the choice of different parameter configurations and the careful engineering of the software with respect to the specific framework under consideration may be crucial in order to achieve good performance, especially on very large amounts of data. We choose k-mer counting as a case study for our analysis, and Spark as the framework to implement FastKmer, a novel approach for the extraction of k-mer statistics from large collections of biological sequences, with arbitrary values of k.

Results: One of the most relevant contributions of FastKmer is the introduction of a module for balancing the statistics aggregation workload over the nodes of a computing cluster, in order to overcome data skew while allowing for a full exploitation of the underlying distributed architecture. We also present the results of a comparative experimental analysis showing that our approach is currently the fastest among those based on Big Data technologies, while exhibiting very good scalability.

Conclusions: We provide evidence that the use of technologies such as Hadoop or Spark for the analysis of big datasets of biological sequences is productive only if the architectural details and the peculiar aspects of the considered framework are carefully taken into account in the algorithm design and implementation.
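
To make the MapReduce-style approach concrete, the sketch below shows how k-mer counting can be expressed in Spark (PySpark), together with a custom partitioning function that hints at the kind of workload balancing the abstract describes for countering data skew. This is an illustrative sketch only, not the FastKmer implementation: the value of k, the input and output paths, the number of partitions, and the prefix-based partitioner are all hypothetical choices made for the example.

# Minimal PySpark sketch of distributed k-mer counting, assuming one
# pre-processed read per input line. NOT the FastKmer implementation.
from pyspark import SparkConf, SparkContext

K = 31                 # k-mer length; arbitrary, chosen for the example
NUM_PARTITIONS = 64    # hypothetical number of aggregation partitions

BASE = {"A": 0, "C": 1, "G": 2, "T": 3}

def kmers(read, k=K):
    # Map phase: emit a (k-mer, 1) pair for every k-length substring of a read.
    seq = read.strip().upper()
    return [(seq[i:i + k], 1) for i in range(len(seq) - k + 1)]

def prefix_partitioner(num_partitions):
    # Toy stand-in for workload balancing: route each k-mer to a partition
    # by a deterministic code of its first 4 bases, so aggregation work is
    # spread over the cluster instead of piling up on a few hot partitions.
    def part(kmer):
        code = 0
        for base in kmer[:4]:
            code = code * 4 + BASE.get(base, 0)
        return code % num_partitions
    return part

if __name__ == "__main__":
    sc = SparkContext(conf=SparkConf().setAppName("kmer-count-sketch"))

    # Hypothetical input path: one read per line.
    reads = sc.textFile("hdfs:///data/reads.txt")

    counts = (reads
              .flatMap(kmers)
              .reduceByKey(lambda a, b: a + b,          # sum occurrences per k-mer
                           numPartitions=NUM_PARTITIONS,
                           partitionFunc=prefix_partitioner(NUM_PARTITIONS)))

    counts.saveAsTextFile("hdfs:///out/kmer_counts")    # hypothetical output path
    sc.stop()

The abstract's balancing module addresses exactly the situation where a naive hash partitioning leaves some nodes with far more k-mers to aggregate than others; the prefix-based partitioner above is only a placeholder for that idea, not the strategy actually used by FastKmer.
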
Original language: English
Pages (from-to): 138-151
Number of pages: 14
Journal: BMC Bioinformatics
Volume: 20
Publication status: Published - 2019

Fingerprint

Genomics
Statistics
MapReduce
Cluster computing
Bioinformatics
Computational Biology
Workload
Distributed Architecture
Information management
Algorithm Design
Scalability
Experimental Analysis
Data Management
Comparative Analysis
Software
Large Data Sets

All Science Journal Classification (ASJC) codes

  • Structural Biology
  • Biochemistry
  • Molecular Biology
  • Computer Science Applications
  • Applied Mathematics

Cite this

Analyzing big datasets of genomic sequences: Fast and scalable collection of k-mer statistics. / Giancarlo, Raffaele; Rombo, Simona Ester; Sorella, Mara; Ferraro Petrillo, Umberto; Cattaneo, Giuseppe.

In: BMC Bioinformatics, Vol. 20, 2019, pp. 138-151.

Research output: Article

Giancarlo, Raffaele ; Rombo, Simona Ester ; Sorella, Mara ; Ferraro Petrillo, Umberto ; Cattaneo, Giuseppe. / Analyzing big datasets of genomic sequences: Fast and scalable collection of k-mer statistics. In: BMC Bioinformatics. 2019 ; Vol. 20. pp. 138-151.
@article{a030b745102d4b1999474c5da977c9bf,
title = "Analyzing big datasets of genomic sequences: Fast and scalable collection of k-mer statistics",
author = "Raffaele Giancarlo and Rombo, {Simona Ester} and Mara Sorella and {Ferraro Petrillo}, Umberto and Giuseppe Cattaneo",
year = "2019",
language = "English",
volume = "20",
pages = "138--151",
journal = "BMC Bioinformatics",
issn = "1471-2105",
publisher = "BioMed Central",

}

TY - JOUR

T1 - Analyzing big datasets of genomic sequences: Fast and scalable collection of k-mer statistics

AU - Giancarlo, Raffaele

AU - Rombo, Simona Ester

AU - Sorella, Mara

AU - Ferraro Petrillo, Umberto

AU - Cattaneo, Giuseppe

PY - 2019

Y1 - 2019

UR - http://hdl.handle.net/10447/390422

UR - http://www.biomedcentral.com/bmcbioinformatics/

M3 - Article

VL - 20

SP - 138

EP - 151

JO - BMC Bioinformatics

JF - BMC Bioinformatics

SN - 1471-2105

ER -