Armadillo
C++ linear algebra library
Data61

Bug Reports

  • I found a possible bug in the code and/or documentation. Where do I report it?
    See the Support page.



Linking

  • I'm getting lots of unresolved symbols when building my program
    Unresolved symbols at the linking stage usually mean the Armadillo run-time library is not being linked in. Use the CMake installer to install Armadillo as described in the README.txt file, then link your programs with the Armadillo run-time library:
    g++ prog.cpp -o prog -O2 -larmadillo
    

  • What does the Armadillo run-time library do?
    The Armadillo run-time library is generated by the CMake installer and is a wrapper for all the relevant libraries present on your system, such as OpenBLAS, LAPACK, BLAS, ARPACK, SuperLU and ATLAS. On Linux-based systems it also provides a C++11 thread-safe random number generator.
  • How can I link directly with BLAS and LAPACK without using the Armadillo run-time library ?
    Define ARMA_DONT_USE_WRAPPER before including the armadillo header. For example:
      #define ARMA_DONT_USE_WRAPPER
      #include <armadillo>
      
    or, if you're using gcc or clang, you can set the define directly on the command line:

      g++ prog.cpp -o prog -O2 -I /home/blah/armadillo-6.200.4/include -DARMA_DONT_USE_WRAPPER -lblas -llapack

    • On Mac OS X, replace -llapack -lblas with -framework Accelerate
    • You can link with high-speed OpenBLAS instead of standard BLAS by changing -lblas -llapack to -lopenblas -llapack

  • Can I use Armadillo as a pure template library?
    Yes. Define ARMA_DONT_USE_WRAPPER and link directly with BLAS and LAPACK, as described in the answer to the previous question.

  • Can I use Armadillo without LAPACK and BLAS?
    Basic functionality will be available (eg. matrix addition and multiplication), but functions that rely on LAPACK (eg. eigen decomposition, matrix inversion) will not be. Matrix multiplication (mainly for large matrices) may also be slower, as Armadillo falls back on its built-in multiply.
  • Where do I get LAPACK, BLAS, etc ?
    • For Linux-based systems (eg. Fedora and Ubuntu) pre-built OpenBLAS, LAPACK, BLAS and ATLAS packages are available. You need to explicitly install them before installing Armadillo. Make sure you also install the related development packages (which contain header files).
    • Mac OS X comes with the Accelerate framework, which is an optimised implementation of BLAS and LAPACK. The CMake installer uses the Accelerate framework by default.
    • For Windows systems, Armadillo comes with pre-compiled 64 bit versions of standard LAPACK and BLAS. See the download page for more info.
  • Can I use high-speed LAPACK and BLAS replacements (eg. OpenBLAS, Intel MKL, AMD ACML)?
    Yes. The CMake installer should automatically detect such libraries if they are installed on your system. Otherwise, you can directly link with such libraries -- see the answers to previous questions.



Speed

  • Is automatic SIMD vectorisation supported (eg. SSE2) ?
    Yes. As of version 3.900, elementary expressions (eg. matrix addition, multiplication by scalar) can be vectorised into SSE2 instructions when using GCC 4.7+ with -O3 optimisation. For example, compile your code using:
    g++ prog.cpp -o prog -O3 -larmadillo 
    
    To get further speedups (ie. to use SSE3, SSE4, or AVX instructions), or to enable SSE2 on 32 bit machines, add the -march=native option. For example:
    g++ prog.cpp -o prog -O3 -march=native -larmadillo
    
  • How fast is Armadillo's matrix multiplication ?
    Armadillo uses BLAS for matrix multiplication, meaning the speed is dependent on the implementation of BLAS. You can use high-speed BLAS replacements to obtain considerably higher performance, such as the multi-threaded (parallelised) OpenBLAS, or Intel MKL, or AMD ACML. Under Mac OS X, the Accelerate framework can be used. If no BLAS library is available, Armadillo will use its built-in matrix multiply, which is generally fast enough for small and medium sized matrices. See also how to use BLAS replacements.
  • How fast is Armadillo's eigen decomposition, matrix inversion, etc ?
    Armadillo uses LAPACK for various matrix decompositions and factorisations, meaning the speed is dependent on the implementation of LAPACK and/or BLAS. You can use high-speed LAPACK and BLAS replacements to obtain considerably higher performance, such as the multi-threaded OpenBLAS, or Intel MKL, or AMD ACML. Under Mac OS X, the Accelerate framework can be used. See also how to use LAPACK replacements.
  • Can I use Armadillo with a GPU or OpenCL to speed up large matrix multiplications?
    Yes. You can link with NVIDIA NVBLAS which is a GPU-accelerated implementation of BLAS, or with AMD ACML which will take advantage of GPUs through OpenCL.



Development

  • Who are the developers ?
    Lead development is done by Conrad Sanderson, located at the Data61 / NICTA Queensland laboratory.
  • Can you implement features on request ?
    Sorry, no. If you'd like to see a feature in Armadillo, please submit a patch. The contributed code must have accompanying tests and user documentation.



Features / Functions

  • I can't find my favourite function in the documentation. Where is it ?
    If it's not in the documentation, it doesn't exist. See also the answers to development questions.
  • I'm using an Armadillo package that comes with Ubuntu/Debian/Fedora/SUSE, and a lot of functions appear to be missing.
    Armadillo packages that come with Linux distributions can be outdated. You can manually upgrade to the latest version.
  • Can Armadillo make use of C++11 features ?
    Yes. Armadillo will enable extra features (such as move constructors) when a C++11 compiler is detected. You may need to explicitly enable C++11 mode in your compiler (eg. -std=c++11 in gcc & clang).
  • Can I use the C++11 auto keyword with Armadillo objects and/or expressions?
    Use of C++11 auto is not recommended with Armadillo objects and expressions. Armadillo has a template meta-programming framework which creates many short-lived temporary (expression) objects; auto deduces these internal expression types instead of an evaluated matrix, which can lead to unexpected results.
  • Is Armadillo a C++11 only library ?
    No. Armadillo will work with compilers supporting the older C++98 and C++03 standards, as well as the newer C++11 and C++14 standards.
  • Is there support for fixed size (static size) matrices ?
    Yes. See the documentation for advanced matrix constructors. Use of fixed size matrices can help the compiler to optimise. Use of fixed size matrices is in general recommended only for small matrices (eg. ≤ 10x10, or ≤ 100 elements).
  • Is there support for sparse matrices ?
    Yes. As of version 3.4, there is preliminary support for sparse matrices. Sparse matrices are stored in compressed sparse column format via the SpMat class. Furthermore, dense matrix multiplication and inversion involving diagonal matrices takes into account sparsity (in order to reduce computation).
  • Does Armadillo take into account possible aliasing ?
    Yes. Armadillo checks for aliasing wherever it's possible to do so. In normal usage of the library this means aliasing is always checked. However, if you're evil enough you can always construct an artificial case to defeat any alias checking mechanism; in particular, if you construct matrices using writeable auxiliary memory (externally managed memory), your code will be responsible for taking care of possible aliasing.
  • Is it possible to interface Armadillo with other libraries ?
    Yes. This can be done by creating matrices (or cubes) that use auxiliary memory, or by accessing elements through STL-style iterators, or by directly obtaining a pointer to matrix memory via the .memptr() function.
  • Is it possible to use Armadillo matrices with user-defined/custom element types ?
    Armadillo supports matrices with the following element types: float, double, std::complex<float>, std::complex<double>, short, int, long, and unsigned versions of short, int, long. Support for other types is beyond the scope of Armadillo.
  • Is it possible to use Armadillo from other languages?
    Yes, via third-party bindings; see the Related Software & Libraries section below (eg. RcppArmadillo for R, armanpy and cyarma for Python / NumPy, ArmadilloJava for Java).
  • Is the API stable ?
    Yes, within each major version.

    Long answer:
    Armadillo's version number is A.B.C, where A is a major version, B is a minor version, and C is the patch level (indicating bug fixes).

    Within each major version (eg. 4.x), minor versions with an even number (ie. evenly divisible by two) are backwards compatible with earlier even minor versions. For example, code written for version 4.000 will work with version 4.100, 4.120, 4.200, etc. However, as each minor version may have more features (ie. API extensions) than earlier versions, code specifically written for version 4.100 doesn't necessarily work with 4.000.

    Experimental versions are denoted by an odd minor version number (ie. not evenly divisible by two), such as 4.199. Experimental versions are generally faster and/or have more functionality, but their APIs have not been finalised yet (though the likelihood of API changes is quite low).

    We don't like changes to existing APIs and strongly prefer not to break any user software. However, to allow evolution, we reserve the right to alter the APIs in future major versions of Armadillo while remaining backwards compatible in as many cases as possible (eg. version 5.x may have slightly different APIs than 4.x). In a rare instance the user API may need to be tweaked if a bug fix absolutely requires it.

    This policy is applicable to the APIs described in the documentation; it is not applicable to internal functions (ie. the underlying internal implementation details may change across consecutive minor versions).



Related Software & Libraries

  • Are there open source projects using Armadillo ?
    Yes. Examples:
       
      MLPACK fast machine learning library (classification, regression, clustering, etc)
      libpca principal component analysis library
      SmartGridToolbox Smart Grid simulation library
      SigPack C++ signal processing library
       
      foreground robust foreground estimation / background subtraction algorithm
      groupsac algorithm for geometric vision problems
      GRASTA low rank subspace object tracking
      background_est clean background estimation from cluttered scenes
       
      Vespucci tool for spectroscopic data analysis and imaging
      SMART+ analysis of mechanics of materials
      ERKALE quantum chemistry
      libdynamica numerical methods used in physics
      ECOC PAK error correcting output codes
       
      gplib C++ Gaussian process library
      AVRS acoustic virtual reality system
      bnp inference in a hierarchical Dirichlet process model
      KL1p compressed sensing / sparse coding
       
      Gadgetron medical image reconstruction
      molotov motif locator (genetics)
      OptGpSampler sampling genome-scale metabolic networks
      GStream genetics (SNP and CNV genotyping)
       
      Mantella analysis and solution of optimisation problems
      liger integrated optimisation environment
      GNSS-SDR global navigation satellite system receiver
      Flow123d simulator of underground water flow
       
      matlab2cpp tool for converting Matlab code to Armadillo based C++ code
      armanpy Armadillo bindings/interface to Python (NumPy)
      cyarma alternate Armadillo bindings/interface to Python (NumPy)
      ArmadilloJava Java based interfaces similar to Armadillo API
      RcppArmadillo bridge between R and Armadillo, on which 200+ other packages depend
  • How is Armadillo related to uBLAS (part of Boost) ?
    Armadillo and uBLAS use similar template techniques to handle multi-matrix expressions. While uBLAS (currently) has more matrix types, Armadillo has considerably more accessible syntax (ie. it is easier to use). Furthermore, Armadillo provides an efficient wrapper to the LAPACK and ATLAS libraries, thereby providing functionality and machine-dependent optimisations not present in uBLAS (eg. matrix inversion).
  • How is Armadillo related to IT++ ?
    IT++ does not use delayed evaluation, making it inefficient (slow) for multi-matrix expressions or when handling sub-matrices. Furthermore, IT++ is licensed under the GPL without any exceptions, meaning that your code becomes "infected" with the GPL -- an issue when developing proprietary/commercial applications. Lastly, Armadillo has a more thorough treatment of vectors.
  • How is Armadillo related to Newmat ?
    Newmat has (currently) more matrix types, but does not handle delayed evaluation as well. Newmat's handling of sub-matrices is also relatively slow. Newmat has no provision to use LAPACK or ATLAS, which affects speed.



Miscellaneous

  • Are there code examples ?
    See "examples/example1.cpp" which comes with the Armadillo archive. See also the code snippets within the documentation, including the short example program.
  • How can I do ... ?
    Check the documentation and/or the README.txt file that comes with the Armadillo archive.
  • Is it possible to plot Armadillo data directly from C++ ?
    Yes. Try gnuplot-cpp, gnuplot-iostream, scopemm.
  • What was the motivation for Armadillo ?
    Armadillo was originally developed as part of a NICTA computer vision R&D project, in order to provide a solid backbone for computationally intensive experimentation, while at the same time allowing for relatively painless transition of research code into production environments (ie. translation of Matlab code to C++). Previous development frameworks and libraries were unsuitable due to limitations in terms of speed, features, licensing, coherency, or being unnecessarily difficult to use.
  • Are there other open source projects associated with NICTA ?
    See OpenNICTA.com.au
  
  