Commit c27973f4 authored by psimion's avatar psimion

Update readme to v1.2

parent 543bb272
species_A.fasta species_A.L.fastq.gz species_A.R.fastq.gz
species_B.fasta species_B.L.fastq species_B.R.fastq
CroCo : a program to detect and remove cross contamination in assembled transcriptomes
=================
#### **User Manual version 1.2**
<br /><br />
#### **Paul Simion, Khalid Belkhir, Clémentine François, Julien Veyssier, Jochen Rink, Michaël Manuel, Hervé Philippe, Max Telford**
1) Institut des Sciences de l'Evolution (ISEM), UMR 5554, CNRS, IRD, EPHE, Université de Montpellier, Montpellier, France
2) Max Plank Institute of Molecular Cell Biology and Genetics, Pfotenhauerstrasse 108, 01307 Dresden, Germany
If you use CroCo in your work, please cite:
Simion et al. BMC Biology (2018) 16:28 DOI 10.1186/s12915-018-0486-7
Simion P, Belkhir K, François C, Veyssier J, Rink JC, Manuel M, Philippe H, Telford MJ. [A software tool 'CroCo' detects pervasive cross-species contamination in next generation sequencing data](https://bmcbiol.biomedcentral.com/articles/10.1186/s12915-018-0486-7). BMC Biology (2018) 16:28 DOI 10.1186/s12915-018-0486-7
---
CroCo will create a directory containing all results.
```bash
Usage :
CroCo_v1.1.sh [--cnf configFile] [--mode p|u] [--tool B|B2|K|R|S] [--fold-threshold INT] [--minimum-coverage FLOAT] [--threads INT] [--output-prefix STR] [--output-level 1|2|3] [--graph yes|no] [--trim5 INT] [--trim3 INT] [--frag-length FLOAT] [--frag-sd FLOAT] [--suspect-id INT] [--suspect-len INT] [--add-option STR] [--recat STR] [--readclean yes|no]

--cnf configFile :      a text file containing the list of transcriptome assemblies to analyze and their associated fastq read files [short: -k]
--mode p|u :            'p' for paired and 'u' for unpaired (DEFAULT : 'p') [short: -m]
--in STR :              Name of the directory containing the input files to be analyzed (DEFAULT : working directory) [short: -i]
--tool B|K|R :          'B' for bowtie, 'K' for kallisto, 'R' for rapmap (DEFAULT : 'K') [short: -t]
--output-prefix STR :   Prefix of the output directory that will be created (DEFAULT : empty) [short: -p]
--output-level 1|2 :    Select whether or not to output fasta files. '1' for none, '2' for all (DEFAULT : 2) [short: -l]
--graph yes|no :        Produce graphical output using R (DEFAULT : no) [short: -g]
--readclean yes|no :    Select whether or not to output fastq files devoid of reads that mapped onto contaminant transcripts (DEFAULT : no) [short: -z]
--add-option 'STR' :    This text string will be passed as additional options to the mapper/quantifier used (DEFAULT : empty) [short: -a]
--recat STR :           Name of a previous CroCo output directory you wish to use to re-categorize transcripts (DEFAULT : no) [short: -r]
--trim5 INT :           Number of bases trimmed from the 5' end of reads (DEFAULT : 0) [short: -x]
It is good practice to redirect information about each CroCo run into a log file, using '2>&1 | tee log_file'.
Minimal working example :
CroCo_v0.1.sh --cnf sampleconfig.txt --mode p 2>&1 | tee log_file
Exhaustive example :
CroCo_v0.1.sh --cnf configFile --mode p --in data_folder_name --tool K --fold-threshold 2 --minimum-coverage 0.2 --overexp 300 --threads 8 --output-prefix test1_ --output-level 2 --graph yes --add-option '-v 0' --trim5 0 --trim3 0 --suspect-id 95 --suspect-len 40 --recat no --readclean no 2>&1 | tee log_file
Exhaustive example using shortcuts :
CroCo_v0.1.sh -k configFile -m p -i data_folder_name -t K -f 2 -c 0.2 -d 300 -n 8 -p test1_ -l 2 -g yes -a '-v 0' -x 0 -y 0 -s 95 -w 40 -r no -z no 2>&1 | tee log_file
Example for re-categorizing previous CroCo results
CroCo_v0.1.sh -k configFile -i data_folder_name -r previous_CroCo_results_folder_name -f 10 -c 0.5 -g yes 2>&1 | tee log_file
```
# Inputs
The transcriptome fasta files and their corresponding fastq files to be analyzed must be present in a given directory specified by the user with the option `--in` [short: -i]. CroCo will then look for a configuration file, specified with the option `--cnf` [short: -k], to get the list of files to use. The format of this configuration file is as follows:
**for paired-end reads**
species_A.fasta forward-reads-for-speciesA.fastq reverse-reads-for-speciesA.fastq
species_B.fasta forward-reads-for-speciesB.fastq reverse-reads-for-speciesB.fastq
**for unpaired reads** :
species_A.fasta reads-for-speciesA.fastq
species_B.fasta reads-for-speciesB.fastq
Things to remember when preparing input files :
* The transcriptome files must end with ".fasta"
* The columns of the config file must be tab-separated.
* If the raw Illumina reads are gzipped, the file extension must be ".gz".
* So far, gzipped data are only handled by CroCo when using Kallisto (which is the default mapping tool).
* So far, it is not possible to analyze both single-end reads and paired-end reads in the same CroCo run.
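A paired-end configuration file matching the layout above can be written, for instance, like this (the species names are just examples; `printf '\t'` guarantees real tab characters between the columns):

```shell
# Write a tab-separated CroCo configuration file (example file names only).
# Column 1: assembly, column 2: left/forward reads, column 3: right/reverse reads.
printf '%s\t%s\t%s\n' \
  species_A.fasta species_A.L.fastq.gz species_A.R.fastq.gz \
  species_B.fasta species_B.L.fastq.gz species_B.R.fastq.gz \
  > sampleconfig.txt
```

Since `printf` reuses its format string for the extra operands, the six file names above produce exactly two three-column rows.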
**Transcriptome assemblies** must be in fasta format. The names of the fasta files will be used internally as the ID of each sample. Do not use a three-character-long ID for your samples (e.g. AAX.fasta or ED2.fasta), as this will confuse BLAST. It is also good practice to avoid special characters (e.g. ``\/[]()|:;``) in sequence names within the assembled transcriptomes, as the tools used within CroCo might complain about them. Note that CroCo will temporarily cut sequence names after the first spacing character, so make sure that the first part (i.e. the first word) of every sequence name is sufficient as a unique ID. This is to handle long sequence names produced by some assembly software, as well as richly annotated sequences. Of course, CroCo outputs assemblies with the original full sequence names.
**Read fastq files** should use Phred+33 as the quality score scheme, which is usually the case by default.
(If you are unsure which quality score scheme your fastq files are encoded in, see <https://en.wikipedia.org/wiki/FASTQ_format#Format>
or use this nice python script: <https://github.com/brentp/bio-playground/blob/master/reads-utils/guess-encoding.py>).
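As a rough heuristic (this is not the linked script), the smallest byte value observed on the quality lines already separates the two schemes: values below 59 can only occur with Phred+33. A minimal sketch, using a made-up one-read fastq file:

```shell
# Tiny example fastq file with Phred+33 qualities ('!' is byte 33).
printf '@r1\nACGT\n+\nII5!\n' > reads.fastq

# Extract every 4th line (the quality line), dump the byte values,
# and keep the smallest one.
min=$(awk 'NR % 4 == 0' reads.fastq | tr -d '\n' | od -An -tu1 \
      | tr -s ' ' '\n' | sed '/^$/d' | sort -n | head -n 1)

# Bytes below 59 cannot occur in Phred+64 (nor in old Solexa) encodings.
if [ "$min" -lt 59 ]; then echo "looks like Phred+33"; else echo "probably Phred+64"; fi
```

With the example read above, the minimum quality byte is 33, so the sketch reports Phred+33.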
Two files will also be created by CroCo :
- a `CroCo_summary` file, the main output file, containing various statistics on transcript categorization for each sample
- a `CroCo_profiles` file, which contains all sorted log2fold change values (computed between the focal sample and the most highly expressed alien sample). This file can be used to plot the log2fold change curves and thus visualize the distribution and certainty level of cross contamination events; it can also be used to refine threshold values.
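For illustration only (this is not CroCo's code), the log2fold change underlying these profiles can be sketched as follows; the quantification values and the decision rule are assumptions made up for the example:

```shell
# Hypothetical sketch: compute log2(focal/alien) for one transcript and flag it
# when the alien signal exceeds the focal one by more than the fold threshold.
focal=12.5    # example quantification in the focal sample
alien=200.0   # example quantification in the most highly expressed alien sample
threshold=2   # in the spirit of --fold-threshold 2

awk -v f="$focal" -v a="$alien" -v t="$threshold" 'BEGIN {
  l2fc = log(f / a) / log(2)   # log2 fold change (awk has no log2, so log/log(2))
  verdict = (l2fc < -t) ? "contamination suspect" : "clean"
  printf "log2fold = %.2f -> %s\n", l2fc, verdict
}'
```

With these example numbers the ratio is 2^-4, so the sketch prints `log2fold = -4.00 -> contamination suspect`.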
---
By default, five fasta files corresponding to the five transcript categories will be created.
Recommendations for downstream analyses :
- ONLY use the *clean* transcripts for direct qualitative analyses, such as phylogenetic inference or presence/absence observations. There is a slight risk of missing some correct data, but you will not lose rigour.
- use both *clean* and *low coverage* transcripts if downstream analyses can adequately detect and circumvent potential cross contaminations (such as a refined orthology assignment step).
- if necessary, scavenge transcripts from the *dubious* category on a case-by-case basis, always checking their true origin (e.g. BLASTing them against the NCBI non-redundant nucleotide database or mapping them onto a genome). If still in doubt, discard them.
- use the *overexpressed* category on a case-by-case basis. These transcripts are strongly expressed in several samples, which means they might stem from highly conserved genes whose exact taxonomic origin might not be trivial to determine. They could also come from external contamination shared by several samples. Users might want to evaluate these transcripts with other tools, such as [Busco](http://busco.ezlab.org/). Note that if you only analyze two samples, no transcript will ever be categorized as *overexpressed*, as this requires the transcript to be highly expressed in at least three samples.
---
```bash
firefox network_simplified.html &
```
Please note that if you want to move these html files (e.g. to store CroCo results elsewhere), you'll also need to move along their associated folders, respectively named *network_complete_files*, *network_simplified_files*, *network_dubious_complete_files* and *network_dubious_simplified_files* as the html files need them to display the networks!
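For example, to relocate one network together with its companion folder (the `results` and `archive` paths are placeholders created here only for the sketch):

```shell
# Placeholder layout standing in for a real CroCo results directory:
mkdir -p results/network_simplified_files
touch results/network_simplified.html

# Move the html file and its *_files folder together, so the html
# can still find the resources it needs to display the network:
mkdir -p archive
mv results/network_simplified.html results/network_simplified_files archive/
```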
---
**Clean reads**
If the option `--readclean` is set to `yes`, CroCo will map the reads onto their corresponding transcriptomes (using bowtie) and output cleaned fastq files from which all reads mapping onto contaminant transcripts have been excluded. Note that reads mapping onto low coverage, dubious or over-expressed transcripts will be kept. Lastly, even if input read files were gzipped, the cleaned fastq files will be written without compression.
------
# Detailed options
**--cnf** (-k)

This indicates the configuration file containing the names of the transcriptome assemblies to be analyzed, as well as the names of their corresponding read files. Each row corresponds to one sample: the first column is the transcriptome file name, the second column is the read file name, and the third column is the second read file name (when using paired-end reads). See details in the [Inputs](#inputs) section above.

---

**--in** (-i)

This option allows the user to specify the directory containing the input files (transcriptomes and raw reads).
If not specified, CroCo will look for these files in the current directory.

---

**--mode** (-m)

This parameter specifies whether the raw data is paired-end (`p`) or single-end (unpaired, `u`).
Don't forget to adjust the input file names accordingly: NAME.fastq for unpaired data, NAME.L.fastq + NAME.R.fastq for paired-end data.

IMPORTANT : if `--mode` is set to *unpaired* AND the tool used is *Kallisto*, then the options `--frag-length` and `--frag-sd` are required.
---
**--tool** (-t)
This allows you to choose the mapper/quantifier to use from the following list : bowtie (`B`), kallisto (`K`, default), and rapmap (`R`).

IMPORTANT : These tools use different approaches to map and to quantify reads, resulting in possibly different levels of precision and speed. The accuracy of CroCo relies on the accuracy of the tool selected.

The decision to use Kallisto as default is based both on analyses of simulated data, which suggested that this mapping tool leads to high accuracy when comparing closely related samples, and on Kallisto's ability to handle compressed read files.
---
VSEARCH: a versatile open source tool for metagenomics
Copyright (C) 2014-2017, Torbjorn Rognes, Frederic Mahe and Tomas Flouri
All rights reserved.
Contact: Torbjorn Rognes <torognes@ifi.uio.no>,
Department of Informatics, University of Oslo,
PO Box 1080 Blindern, NO-0316 Oslo, Norway
This software is dual-licensed and available under a choice
of one of two licenses, either under the terms of the GNU
General Public License version 3 or the BSD 2-Clause License.
GNU General Public License version 3
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
The BSD 2-Clause License
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
[![Build Status](https://travis-ci.org/torognes/vsearch.svg?branch=master)](https://travis-ci.org/torognes/vsearch)
# VSEARCH
## Introduction
The aim of this project is to create an alternative to the [USEARCH](http://www.drive5.com/usearch/) tool developed by Robert C. Edgar (2010). The new tool should:
* have open source code with an appropriate open source license
* be free of charge, gratis
* have a 64-bit design that handles very large databases and much more than 4GB of memory
* be as accurate or more accurate than usearch
* be as fast or faster than usearch
We have implemented a tool called VSEARCH which supports *de novo* and reference based chimera detection, clustering, full-length and prefix dereplication, rereplication, reverse complementation, masking, all-vs-all pairwise global alignment, exact and global alignment searching, shuffling, subsampling and sorting. It also supports FASTQ file analysis, filtering, conversion and merging of paired-end reads.
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps.
VSEARCH binaries are provided for x86-64 systems running GNU/Linux, macOS (version 10.7 or higher) and Windows (64-bit, version 7 or higher), as well as ppc64le systems running GNU/Linux.
VSEARCH can directly read input query and database files that are compressed using gzip and bzip2 (.gz and .bz2) if the zlib and bzip2 libraries are available.
Most of the nucleotide based commands and options in USEARCH version 7 are supported, as well as some in version 8. The same option names as in USEARCH version 7 have been used in order to make VSEARCH an almost drop-in replacement. VSEARCH does not support amino acid sequences or local alignments. These features may be added in the future.
## Getting Help
If you can't find an answer in the [VSEARCH documentation](https://github.com/torognes/vsearch/releases/download/v2.8.0/vsearch_manual.pdf), please visit the [VSEARCH Web Forum](https://groups.google.com/forum/#!forum/vsearch-forum) to post a question or start a discussion.
## Example
In the example below, VSEARCH will identify sequences in the file database.fsa that are at least 90% identical on the plus strand to the query sequences in the file queries.fsa and write the results to the file alnout.txt.
`./vsearch --usearch_global queries.fsa --db database.fsa --id 0.9 --alnout alnout.txt`
## Download and install
**Source distribution** To download the source distribution from a [release](https://github.com/torognes/vsearch/releases) and build the executable and the documentation, use the following commands:
```
wget https://github.com/torognes/vsearch/archive/v2.8.0.tar.gz
tar xzf v2.8.0.tar.gz
cd vsearch-2.8.0
./autogen.sh
./configure
make
make install # as root or sudo make install
```
You may customize the installation directory using the `--prefix=DIR` option to `configure`. If the compression libraries [zlib](http://www.zlib.net) and/or [bzip2](http://www.bzip.org) are installed on the system, they will be detected automatically and support for compressed files will be included in vsearch. Support for compressed files may be disabled using the `--disable-zlib` and `--disable-bzip2` options to `configure`. A PDF version of the manual will be created from the `vsearch.1` manual file if `ps2pdf` is available, unless disabled using the `--disable-pdfman` option to `configure`. Other options may also be applied to `configure`, please run `configure -h` to see them all. GNU autotools (version 2.63 or later) and the gcc compiler are required to build vsearch.
The IBM XL C++ compiler is recommended on ppc64le systems.
The Windows binary was compiled using the [Mingw-w64](https://mingw-w64.org/) C++ cross-compiler.
**Cloning the repo** Instead of downloading the source distribution as a compressed archive, you could clone the repo and build it as shown below. The options to `configure` as described above are still valid.
```
git clone https://github.com/torognes/vsearch.git
cd vsearch
./autogen.sh
./configure
make
make install # as root or sudo make install
```
**Binary distribution** Starting with version 1.4.0, binary distribution files containing pre-compiled binaries as well as the documentation will be made available as part of each [release](https://github.com/torognes/vsearch/releases). The executables include support for input files compressed by zlib and bzip2 (with files usually ending in `.gz` or `.bz2`).
Binary distributions are provided for x86-64 systems running GNU/Linux, macOS (version 10.7 or higher) and Windows (64-bit, version 7 or higher), as well as ppc64le systems running GNU/Linux.
Download the appropriate executable for your system using the following commands if you are using a Linux x86_64 system:
```sh
wget https://github.com/torognes/vsearch/releases/download/v2.8.0/vsearch-2.8.0-linux-x86_64.tar.gz
tar xzf vsearch-2.8.0-linux-x86_64.tar.gz
```
Or these commands if you are using a Linux ppc64le system:
```sh
wget https://github.com/torognes/vsearch/releases/download/v2.8.0/vsearch-2.8.0-linux-ppc64le.tar.gz
tar xzf vsearch-2.8.0-linux-ppc64le.tar.gz
```
Or these commands if you are using a Mac:
```sh
wget https://github.com/torognes/vsearch/releases/download/v2.8.0/vsearch-2.8.0-macos-x86_64.tar.gz
tar xzf vsearch-2.8.0-macos-x86_64.tar.gz
```
Or if you are using Windows, download and extract (unzip) the contents of this file:
```
https://github.com/torognes/vsearch/releases/download/v2.8.0/vsearch-2.8.0-win-x86_64.zip
```
Linux and Mac: You will now have the binary distribution in a folder called `vsearch-2.8.0-linux-x86_64` or `vsearch-2.8.0-macos-x86_64` in which you will find three subfolders `bin`, `man` and `doc`. We recommend making a copy or a symbolic link to the vsearch binary `bin/vsearch` in a folder included in your `$PATH`, and a copy or a symbolic link to the vsearch man page `man/vsearch.1` in a folder included in your `$MANPATH`. The PDF version of the manual is available in `doc/vsearch_manual.pdf`.
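One possible way to set this up without root rights (the folder name follows the Linux x86_64 example above; adjust the paths to wherever you actually extracted the archive):

```shell
# Sketch: expose the unpacked vsearch binary and man page via symlinks.
mkdir -p "$HOME/bin" "$HOME/man/man1"
ln -sf "$PWD/vsearch-2.8.0-linux-x86_64/bin/vsearch" "$HOME/bin/vsearch"
ln -sf "$PWD/vsearch-2.8.0-linux-x86_64/man/vsearch.1" "$HOME/man/man1/vsearch.1"

# Make the links visible to the shell and to man.
# Note: on some systems, setting MANPATH replaces the default search path.
export PATH="$HOME/bin:$PATH"
export MANPATH="$HOME/man:$MANPATH"
```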
Windows: You will now have the binary distribution in a folder called `vsearch-2.8.0-win-x86_64`. The vsearch executable is called `vsearch.exe`. The manual in PDF format is called `vsearch_manual.pdf`.
**Documentation** The VSEARCH user's manual is available in the `man` folder in the form of a [man page](https://github.com/torognes/vsearch/blob/master/man/vsearch.1). A pdf version ([vsearch_manual.pdf](https://github.com/torognes/vsearch/releases/download/v2.8.0/vsearch_manual.pdf)) will be generated by `make`. To install the manpage manually, copy the `vsearch.1` file or create a symbolic link to `vsearch.1` in a folder included in your `$MANPATH`. The manual in both formats is also available with the binary distribution. The manual in PDF form ([vsearch_manual.pdf](https://github.com/torognes/vsearch/releases/download/v2.8.0/vsearch_manual.pdf)) is also attached to the latest [release](https://github.com/torognes/vsearch/releases).
## Plugins, packages, and wrappers
**QIIME 2 plugin** Thanks to the [QIIME 2](https://github.com/qiime2) team, there is now a plugin called [q2-vsearch](https://github.com/qiime2/q2-vsearch) for [QIIME 2](https://qiime2.org).
**Homebrew package** Thanks to [Torsten Seeman](https://github.com/tseemann), a [vsearch package](https://github.com/Homebrew/homebrew-science/pull/2409) for [Homebrew](http://brew.sh/) has been made.
**Debian package** Thanks to the [Debian Med](https://www.debian.org/devel/debian-med/) team, there is now a [vsearch](https://packages.debian.org/sid/vsearch) package in [Debian](https://www.debian.org/).
**Galaxy wrapper** Thanks to the work of the [Intergalactic Utilities Commission](https://wiki.galaxyproject.org/IUC) members, vsearch is now part of the [Galaxy ToolShed](https://toolshed.g2.bx.psu.edu/view/iuc/vsearch/).
## Converting output to a biom file for use in QIIME and other software
With the `from-uc`command in [biom](http://biom-format.org/) 2.1.5 or later, it is possible to convert data in a `.uc` file produced by vsearch into a biom file that can be read by QIIME and other software. It is described [here](https://gist.github.com/gregcaporaso/f3c042e5eb806349fa18).
Please note that vsearch version 2.2.0 and later are able to directly output OTU tables in biom 1.0 format as well as the classic and mothur formats.
## Implementation details and initial assessment
Please see the paper for details:
Rognes T, Flouri T, Nichols B, Quince C, Mahé F. (2016) VSEARCH: a versatile open source tool for metagenomics. PeerJ 4:e2584
doi: [10.7717/peerj.2584](https://doi.org/10.7717/peerj.2584)
## Dependencies
When compiling VSEARCH the header files for the following two optional libraries are required if support for gzip and bzip2 compressed FASTA and FASTQ input files is needed:
* libz (zlib library) (zlib.h header file) (optional)
* libbz2 (bzip2 library) (bzlib.h header file) (optional)
On Windows these libraries are called zlib1.dll and bz2.dll.
VSEARCH will automatically check whether these libraries are available and load them dynamically.
To create the PDF file with the manual the ps2pdf tool is required. It is part of the ghostscript package.
## VSEARCH license and third party licenses
The VSEARCH code is dual-licensed either under the GNU General Public License version 3 or under the BSD 2-clause license. Please see LICENSE.txt for details.
VSEARCH includes code from several other projects. We thank the authors for making their source code available.
VSEARCH includes code from Google's [CityHash project](http://code.google.com/p/cityhash/) by Geoff Pike and Jyrki Alakuijala, providing some excellent hash functions available under a MIT license.
VSEARCH includes code derived from Tatusov and Lipman's DUST program that is in the public domain.
VSEARCH includes public domain code written by Alexander Peslyak for the MD5 message digest algorithm.
VSEARCH includes public domain code written by Steve Reid and others for the SHA1 message digest algorithm.
The VSEARCH distribution includes code from GNU Autoconf which normally is available under the GNU General Public License, but may be distributed with the special autoconf configure script exception.
VSEARCH may include code from the [zlib](http://www.zlib.net) library copyright Jean-loup Gailly and Mark Adler, distributed under the [zlib license](http://www.zlib.net/zlib_license.html).
VSEARCH may include code from the [bzip2](http://www.bzip.org) library copyright Julian R. Seward, distributed under a BSD-style license.
## Code
The code is written in C++, but most of it is plain C with some C++ syntax conventions.
File | Description
---|---
**abundance.cc** | Code for extracting and printing abundance information from FASTA headers
**align.cc** | New Needleman-Wunsch global alignment, serial. Only for testing.
**align_simd.cc** | SIMD parallel global alignment of 1 query with 8 database sequences
**allpairs.cc** | All-vs-all optimal global pairwise alignment (no heuristics)
**arch.cc** | Architecture specific code (Mac/Linux)
**bitmap.cc** | Implementation of bitmaps
**chimera.cc** | Chimera detection
**city.cc** | CityHash code
**cluster.cc** | Clustering (cluster\_fast and cluster\_smallmem)
**cpu.cc** | Code dependent on specific cpu features (e.g. ssse3)
**db.cc** | Handles the database file read, access etc
**dbhash.cc** | Database hashing for exact searches
**dbindex.cc** | Indexes the database by identifying unique kmers in the sequences
**derep.cc** | Dereplication
**dynlibs.cc** | Dynamic loading of compression libraries
**eestats.cc** | Produce statistics for fastq_eestats command
**fasta.cc** | FASTA file parser
**fastq.cc** | FASTQ file parser
**fastqops.cc** | FASTQ file statistics etc
**fastx.cc** | Detection of FASTA and FASTQ files, wrapper for FASTA and FASTQ parsers
**kmerhash.cc** | Hash for kmers used by paired-end read merger
**linmemalign.cc** | Linear memory global sequence aligner
**maps.cc** | Various character mapping arrays
**mask.cc** | Masking (DUST)
**md5.c** | MD5 message digest
**mergepairs.cc** | Paired-end read merging
**minheap.cc** | A minheap implementation for the list of top kmer matches
**msa.cc** | Simple multiple sequence alignment and consensus sequence computation for clusters
**otutable.cc** | Generate OTU tables in various formats
**rerep.cc** | Rereplication
**results.cc** | Output results in various formats (alnout, userout, blast6, uc)
**search.cc** | Implements search using global alignment
**searchcore.cc** | Core search functions for searching, clustering and chimera detection
**searchexact.cc** | Exact search functions
**sha1.c** | SHA1 message digest
**showalign.cc** | Output an alignment in a human-readable way given a CIGAR-string and the sequences
**shuffle.cc** | Shuffle sequences
**sortbylength.cc** | Code for sorting by length
**sortbysize.cc** | Code for sorting by size (abundance)
**subsample.cc** | Subsampling reads from a FASTA file
**udb.cc** | UDB database file handling
**unique.cc** | Find unique kmers in a sequence
**userfields.cc** | Code for parsing the userfields option argument
**util.cc** | Various common utility functions
**vsearch.cc** | Main program file, general initialization, reads arguments and parses options, writes info.
**xstring.h** | Code for a simple string class
VSEARCH may be compiled with zlib or bzip2 integration that allows it to read compressed FASTA files. The [zlib](http://www.zlib.net/) and the [bzip2](http://www.bzip.org/) libraries are needed for this.
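Before handing a compressed file to a build whose zlib/bzip2 support is uncertain, the gzip magic bytes (`1f 8b`) can be checked directly. The sketch below uses only standard Unix tools; the helper name and file names are illustrative, not part of VSEARCH:

```shell
# Check whether a file is gzip-compressed (magic bytes 1f 8b) before
# passing it to a VSEARCH build that may or may not include zlib support.
# The helper name and file names are illustrative placeholders.
is_gzip() {
  [ "$(head -c 2 "$1" | od -An -tx1 | tr -d ' \n')" = "1f8b" ]
}

printf '>seq1\nACGTACGT\n' > toy.fasta
gzip -c toy.fasta > toy.fasta.gz

is_gzip toy.fasta.gz && echo "toy.fasta.gz: compressed"
is_gzip toy.fasta    || echo "toy.fasta: plain text"
```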
## Bugs
All bug reports are highly appreciated.
You may submit a bug report on GitHub as an [issue](https://github.com/torognes/vsearch/issues),
post a message on the [VSEARCH Web Forum](https://groups.google.com/forum/#!forum/vsearch-forum),
or send an email to [torognes@ifi.uio.no](mailto:torognes@ifi.uio.no?subject=bug_in_vsearch).
## Limitations
VSEARCH is designed for rather short sequences, and will be slow when sequences are longer than about 5,000 bp. This is because it always performs optimal global alignment on selected sequences.
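The quadratic cost is easy to see from the size of the dynamic-programming matrix: aligning two sequences of lengths n and m fills roughly n × m cells. A shell-arithmetic sketch (the sequence lengths are arbitrary example values):

```shell
# Number of DP cells filled by a full global alignment of two
# equal-length sequences; lengths are arbitrary illustrative values.
for len in 500 5000 50000; do
  echo "$len bp vs $len bp: $(( len * len )) DP cells"
done
```

Going from 500 bp to 50,000 bp multiplies the work by 10,000, which is why alignment time dominates for long sequences.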
## Future work
Some issues to work on:
* testing and debugging
* heuristics for alignment of long sequences (e.g. banded alignment around selected diagonals)?
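To get a rough sense of what a banded heuristic could save, compare the cell count of a full DP matrix with a fixed-width band around the main diagonal. The length and band half-width below are assumed illustrative values, not VSEARCH parameters:

```shell
# Full vs banded DP cell counts for two equal-length sequences.
# len and the band half-width w are illustrative values only.
len=5000
w=64
full=$(( len * len ))
banded=$(( len * (2 * w + 1) ))
echo "full matrix : $full cells"
echo "banded (w=$w): $banded cells"
```

With these numbers the band visits well under 3% of the full matrix, at the cost of missing optimal alignments that stray outside the band.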
## The VSEARCH team
The main contributors to VSEARCH:
* Torbj&oslash;rn Rognes <torognes@ifi.uio.no> (Coding, testing, documentation, evaluation)
* Fr&eacute;d&eacute;ric Mah&eacute; <mahe@rhrk.uni-kl.de> (Documentation, testing, feature suggestions)
* Tom&aacute;&scaron; Flouri <tomas.flouri@h-its.org> (Coding, testing)
* Christopher Quince <c.quince@warwick.ac.uk> (Initiator, feature suggestions, evaluation)
* Ben Nichols <b.nichols.1@research.gla.ac.uk> (Evaluation)
## Acknowledgements
Special thanks to the following people for patches, suggestions, computer access, etc.:
* Davide Albanese
* Colin Brislawn
* Jeff Epler
* Christopher M. Sullivan
* Andreas Tille
* Sarah Westcott
## Citing VSEARCH
Please cite the following publication if you use VSEARCH:
Rognes T, Flouri T, Nichols B, Quince C, Mahé F. (2016) VSEARCH: a versatile open source tool for metagenomics. PeerJ 4:e2584.
doi: [10.7717/peerj.2584](https://doi.org/10.7717/peerj.2584)
Please note that citing any of the underlying algorithms, e.g. UCHIME, may also be appropriate.
## Test datasets
Test datasets (found in the separate vsearch-data repository) were
obtained from
the [BioMarks project](http://biomarks.eu/) (Logares et al. 2014),
the [TARA OCEANS project](http://oceans.taraexpeditions.org/) (Karsenti et al. 2011)
and the [Protist Ribosomal Database](http://ssu-rrna.org/) (Guillou et al. 2013).
## References
* Edgar RC (2010)
**Search and clustering orders of magnitude faster than BLAST.**
*Bioinformatics*, 26 (19): 2460-2461.
doi:[10.1093/bioinformatics/btq461](http://dx.doi.org/10.1093/bioinformatics/btq461)
* Edgar RC, Haas BJ, Clemente JC, Quince C, Knight R (2011)
**UCHIME improves sensitivity and speed of chimera detection.**
*Bioinformatics*, 27 (16): 2194-2200.
doi:[10.1093/bioinformatics/btr381](http://dx.doi.org/10.1093/bioinformatics/btr381)
* Guillou L, Bachar D, Audic S, Bass D, Berney C, Bittner L, Boutte C, Burgaud G, de Vargas C, Decelle J, del Campo J, Dolan J, Dunthorn M, Edvardsen B, Holzmann M, Kooistra W, Lara E, Lebescot N, Logares R, Mahé F, Massana R, Montresor M, Morard R, Not F, Pawlowski J, Probert I, Sauvadet A-L, Siano R, Stoeck T, Vaulot D, Zimmermann P & Christen R (2013)
**The Protist Ribosomal Reference database (PR2): a catalog of unicellular eukaryote Small Sub-Unit rRNA sequences with curated taxonomy.**
*Nucleic Acids Research*, 41 (D1), D597-D604.
doi:[10.1093/nar/gks1160](http://dx.doi.org/10.1093/nar/gks1160)
* Karsenti E, González Acinas S, Bork P, Bowler C, de Vargas C, Raes J, Sullivan M B, Arendt D, Benzoni F, Claverie J-M, Follows M, Jaillon O, Gorsky G, Hingamp P, Iudicone D, Kandels-Lewis S, Krzic U, Not F, Ogata H, Pesant S, Reynaud E G, Sardet C, Sieracki M E, Speich S, Velayoudon D, Weissenbach J, Wincker P & the Tara Oceans Consortium (2011)
**A holistic approach to marine eco-systems biology.**
*PLoS Biology*, 9(10), e1001177.
doi:[10.1371/journal.pbio.1001177](http://dx.doi.org/10.1371/journal.pbio.1001177)
* Logares R, Audic S, Bass D, Bittner L, Boutte C, Christen R, Claverie J-M, Decelle J, Dolan J R, Dunthorn M, Edvardsen B, Gobet A, Kooistra W H C F, Mahé F, Not F, Ogata H, Pawlowski J, Pernice M C, Romac S, Shalchian-Tabrizi K, Simon N, Stoeck T, Santini S, Siano R, Wincker P, Zingone A, Richards T, de Vargas C & Massana R (2014)
**The patterning of rare and abundant community assemblages in coastal marine-planktonic microbial eukaryotes.**
*Current Biology*, 24(8), 813-821.
doi:[10.1016/j.cub.2014.02.050](http://dx.doi.org/10.1016/j.cub.2014.02.050)
* Rognes T (2011)
**Faster Smith-Waterman database searches by inter-sequence SIMD parallelisation.**
*BMC Bioinformatics*, 12: 221.
doi:[10.1186/1471-2105-12-221](http://dx.doi.org/10.1186/1471-2105-12-221)
@@ -3,7 +3,7 @@ function printUsage(){
 echo "`basename $0` is a script to locally install dependencies for crosscontam. You can choose to install all dependencies or a specific one.
 usage :
-$0 --tool all|B|K|R|BL --os ubuntu|debian|fedora|centos|redhat|macosx
+$0 --tool all|B|K|R|BL|V --os ubuntu|debian|fedora|centos|redhat|macosx
 "
 }
 function printAndUsageAndExit(){
@@ -70,11 +70,11 @@ while true; do
 --tool)
 shift;
 if [ -n "$1" ]; then
-if [[ "$1" == "all" ]] || [[ "$1" == "B" ]] || [[ "$1" == "K" ]] || [[ "$1" == "R" ]] || [[ "$1" == "BL" ]]; then
+if [[ "$1" == "all" ]] || [[ "$1" == "B" ]] || [[ "$1" == "K" ]] || [[ "$1" == "R" ]] || [[ "$1" == "BL" ]] || [[ "$1" == "V" ]]; then
 TOOL=$1
 TOOLFLAG=1
 else
-printAndUsageAndExit "'$1' is an incorrect value for --tool option (all, B, K, R and BL are correct values)"
+printAndUsageAndExit "'$1' is an incorrect value for --tool option (all, B, K, R, BL and V are correct values)"
 fi
 else