Building OpenCV 3.4 on Raspberry Pi

I’ve read many tutorials about building and installing OpenCV on a Raspberry Pi, but they either did not work or were outdated. Hence, I decided to write down all the steps required to build and install OpenCV 3.4.

 
# Update your system
sudo rpi-update
sudo apt-get update
sudo apt-get upgrade
 
# Change swap size
sudo nano /etc/dphys-swapfile
# Change CONF_SWAPSIZE=100 to:
CONF_SWAPSIZE=1024
 
# Restart service
sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start
 
# Install packages
sudo apt-get install build-essential cmake cmake-curses-gui pkg-config
sudo apt-get install libatlas-base-dev gfortran
sudo apt-get install \
  libjpeg-dev \
  libtiff5-dev \
  libjasper-dev \
  libpng12-dev \
  libavcodec-dev \
  libavformat-dev \
  libswscale-dev \
  libeigen3-dev \
  libxvidcore-dev \
  libx264-dev \
  libgtk2.0-dev
 
sudo apt-get -y install libv4l-dev v4l-utils
 
sudo apt-get install python2.7-dev
sudo apt-get install python3-dev
 
pip install numpy
pip3 install numpy
 
# Install OpenCV
wget https://github.com/opencv/opencv/archive/3.4.0.zip -O opencv.zip
wget https://github.com/opencv/opencv_contrib/archive/3.4.0.zip -O opencv_contrib.zip
 
unzip opencv.zip
unzip opencv_contrib.zip
 
cd opencv-3.4.0
mkdir build
cd build
 
sudo cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D BUILD_WITH_DEBUG_INFO=OFF \
    -D BUILD_DOCS=OFF \
    -D BUILD_EXAMPLES=OFF \
    -D BUILD_TESTS=OFF \
    -D BUILD_opencv_ts=OFF \
    -D BUILD_PERF_TESTS=OFF \
    -D INSTALL_C_EXAMPLES=OFF \
    -D INSTALL_PYTHON_EXAMPLES=ON \
    -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-3.4.0/modules \
    -D ENABLE_NEON=ON \
    -D WITH_LIBV4L=ON \
        ..
 
sudo make -j3
sudo make install
sudo ldconfig
 
# Fix library file name
cd /usr/local/lib/python3.5/dist-packages/
sudo mv cv2.cpython-35m.so cv2.so

Quickly executing multiple hbase commands from shell

We tend to set up our HBase test tables via shell scripts, so we have to execute many puts and deletes in a row. If you start a separate hbase shell invocation for every single operation, you will have to wait a long time until all commands are processed. A more performant approach is to put all commands into one single <<EOF … EOF construct (a here-document), which basically allows you to pass a multi-line parameter to the shell.

hbase shell <<EOF
put 'shared:ITEMS','item0001','c:value','1.0'
put 'shared:ITEMS','item0002','c:value','2.0'
put 'shared:ITEMS','item0003','c:value','3.0'
put 'shared:ITEMS','item0004','c:value','4.0'
put 'shared:ITEMS','item0005','c:value','5.0'
put 'shared:ITEMS','item0006','c:value','6.0'
EOF

This way HBase starts only one shell instead of one instance for each single command.

Compiling the Alize speaker detection library in Visual Studio 2015

To compile the ALIZE speaker detection library, download the projects alize-core and LIA_RAL from https://github.com/ALIZE-Speaker-Recognition. Unzip both into the same parent directory and rename the alize-core folder to ALIZE.

Open the SLN file in the ALIZE directory and build it; there should be no errors.

Afterwards, open the SLN file in the LIA_RAL folder. Before you start the build process you have to make changes to two files. In Macros.h, change line 301 from:

#if defined(_MSC_VER) && (!defined(__INTEL_COMPILER))

to:

#if defined(_MSC_VER) && (_MSC_VER < 1900) && (!defined(__INTEL_COMPILER))

Then edit MapBase.h and add the following line after line 172 (right before the keyword public:):

typedef MapBase<Derived, ReadOnlyAccessors> ReadOnlyMapBase;

Afterwards, change the following line:

Base::Base::operator=(other);

to:

ReadOnlyMapBase::Base::operator=(other);

And a few lines later change:

using Base::operator=;

to:

using ReadOnlyMapBase::Base::operator=;

Now build the project liatools in the LIA_RAL solution. Make sure that you pick the same platform as you did before when compiling ALIZE.

ClassNotFoundException in Spark application using KryoSerializer

We frequently encountered a ClassNotFoundException in our Java-based Spark applications for classes that were verifiably included in our application’s JAR. Furthermore, we used the KryoSerializer (org.apache.spark.serializer.KryoSerializer) for performance reasons.

After some very annoying debugging sessions we found out that we can get rid of the exception by registering the apparently missing classes with Kryo, i.e. by adding them to the Spark configuration property spark.kryo.classesToRegister (while spark.serializer is set to org.apache.spark.serializer.KryoSerializer). This property is a simple comma-separated list of fully qualified class names. After adding the classes, the ClassNotFoundException disappeared.
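As an illustration, here is a minimal sketch of how such a configuration could look in a Java Spark application; the registered class names are placeholders and have to be replaced by the classes reported in your ClassNotFoundException:

import org.apache.spark.SparkConf;
import org.apache.spark.sql.SparkSession;

public class KryoConfigExample {

  public static void main(String[] args) {
    // Use Kryo and register the classes that were reported as missing.
    // The class names below are placeholders - replace them with your own classes.
    SparkConf conf = new SparkConf()
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .set("spark.kryo.classesToRegister",
            "de.jofre.spark.model.Item,de.jofre.spark.model.ItemGroup");

    SparkSession spark = SparkSession.builder()
        .appName("KryoConfigExample")
        .config(conf)
        .getOrCreate();

    // ... your Spark job ...

    spark.stop();
  }
}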

Cross compiling OpenCV for ARM from Ubuntu

Similar to the compilation of dlib for ARM on Ubuntu (see http://www.jofre.de/?p=1494), I’d like to provide a short guide on how to compile OpenCV for ARM on Ubuntu.

First, install all required packages:
sudo apt-get install build-essential git cmake pkg-config libjpeg8-dev libtiff5-dev libjasper-dev libpng12-dev libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libgtk2.0-dev libatlas-base-dev gfortran

Download or clone OpenCV from GitHub (https://github.com/opencv/opencv); the paths below assume the sources are located in /home/user1/opencv.

Build OpenCV
cd /home/user1/opencv
mkdir build
cd build
cmake -DSOFTFP=ON -DCMAKE_TOOLCHAIN_FILE=/home/user1/opencv/platforms/linux/arm-gnueabi.toolchain.cmake ..

make

Cross compiling dlib for ARM on Ubuntu

Since I could not find any instructions on how to compile dlib for ARM (to use it e.g. on a Raspberry Pi), I decided to write my own.

First, install all required packages:
sudo apt-get install libc6-armel-cross libc6-dev-armel-cross binutils-arm-linux-gnueabi pkg-config libx11-dev libatlas-base-dev libgtk-3-dev libboost-all-dev build-essential cmake libncurses5-dev gcc-arm-linux-gnueabihf g++-arm-linux-gnueabihf

Download dlib and unzip it:
wget http://dlib.net/files/dlib-19.8.zip
unzip dlib-19.8.zip

Compile:
cd dlib-19.8
mkdir build
cd build
cmake -DCMAKE_C_FLAGS="-O3 -mfpu=neon -fprofile-use -DENABLE_NEON" -DNEON=ON -DCMAKE_C_COMPILER=/usr/bin/arm-linux-gnueabihf-gcc -DCMAKE_CXX_COMPILER=/usr/bin/arm-linux-gnueabihf-g++ -DCMAKE_CXX_FLAGS="-std=c++11" -DCMAKE_BUILD_TYPE=Release ..

make

sudo make install

Unit testing spark 2.x applications in Java

When you search for specific Spark questions you might get the impression that Java is the black sheep among the supported languages: nearly all answers refer to Scala or Python. The same holds for unit testing, which is why I am writing this post. I will show how to create a local context (which is pretty well documented) and how to read Parquet files (or other formats like CSV or JSON; the process is the same) from a source directory within your project. This way you can unit test your classes containing Spark functions without connecting to another file system or resource negotiator.

For the following example it is important to register the folder src/test/resources as a class path / source folder.

The @BeforeClass and @AfterClass annotations mark methods that are called once before the first test of the class runs and once after the last test has finished, respectively.

package de.jofre.spark.tests;
 
import static org.assertj.core.api.Assertions.assertThat;
 
import org.apache.spark.SparkConf;
import org.apache.spark.ml.Pipeline;
import org.apache.spark.ml.feature.SQLTransformer;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import org.junit.experimental.categories.Category;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
 
import de.jofre.test.categories.UnitTest;
 
@Category(UnitTest.class)
public class SparkSessionAndFileReadTest {
	private static Dataset<Row> df;
	private static SparkSession spark;
	private static final Logger logger = LoggerFactory.getLogger(SparkSessionAndFileReadTest.class);
 
	@BeforeClass
	public static void beforeClass() {
		spark = SparkSession.builder().master("local[*]").config(new SparkConf().set("fs.defaultFS", "file:///"))
				.appName(SparkSessionAndFileReadTest.class.getName()).getOrCreate();
		df = spark.read().parquet("src/test/resources/tests/part-r-00001-myfile.snappy.parquet");
		logger.info("Created spark context and dataset with {} rows.", df.count());
	}
 
	@Test
	public void buildClippingTransformerTest() {
		logger.info("Testing the spark sorting function.");
		Dataset<Row> sorted = df.sort("id");
		assertThat(sorted.count()).isEqualTo(df.count());
	}
 
	@AfterClass
	public static void afterClass() {
		if (spark != null) {
			spark.stop();
		}
	}
}

Permanently add a proxy to MiKTex

MiKTeX is a TeX distribution that is required to translate LaTeX documents into target formats such as PDF. One task of such a distribution is the management of the packages used in your documents. MiKTeX downloads these packages when they are first used or updated and hence requires an internet connection, as long as you do not have the packages on a portable medium.

If you sit behind a proxy, like me, you have to configure MiKTeX to use it. What I did for a long (and annoying) time was to enter the proxy URL in the MiKTeX Update tool and check "Authentication required". This forces the tool to ask for your proxy credentials every single time. A better way is to uncheck "Authentication required" and to specify user and password directly in the URL. If you do so, you should never be asked for your user/password combination again.
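For example, such a proxy URL follows the usual user:password@host:port scheme; the host, port and credentials below are of course just placeholders:

http://jdoe:secret@proxy.example.com:8080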

Calculate prime numbers using spark

Hi, for a test I wrote a short Java application that calculates the prime numbers between 0 and 1000000 on a distributed Spark cluster. Since Spark 2.x examples are rare on the internet, I just leave this here. The prime number check is based on code by Oscar Sanchez.

import org.apache.spark.api.java.function.MapFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.SparkSession;
import scala.Tuple2;
 
public class Main {
 
  private static boolean isPrime(long n) {
    // 0 and 1 are not prime
    if (n < 2) {
      return false;
    }
    // it suffices to test divisors up to the square root of n
    for (long i = 2; i * i <= n; i++) {
      if (n % i == 0) {
        return false;
      }
    }
    return true;
  }
 
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder().appName("PrimeApp").getOrCreate();
    Dataset<Tuple2<Long, Boolean>> rnd = spark.range(0L, 1000000L).map(
      (MapFunction<Long, Tuple2<Long, Boolean>>) x -> new Tuple2<Long, Boolean>(x, isPrime(x)), Encoders.tuple(Encoders.LONG(), Encoders.BOOLEAN()));
    rnd.show(false);
    spark.stop();
  }
 
}

Reading Parquet Files from a Java Application

Recently I came across the requirement to read a Parquet file into a Java application and I figured out that it is neither well documented nor easy to do. As a consequence I wrote this short tutorial. The first task is to add the Maven dependencies.

<dependencies>
  <dependency>
    <groupId>org.apache.parquet</groupId>
    <artifactId>parquet-hadoop</artifactId>
    <version>1.9.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.7.0</version>
  </dependency>
</dependencies>

Writing the Java application is easy once you know how to do it. Instead of the AvroParquetReader or the ParquetReader class that you frequently find when searching for a way to read Parquet files, use the class ParquetFileReader. The basic approach is to read the file’s row groups one after another and then read all records of each group.
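The following is a rough, minimal sketch of this approach; the file path and the class name are placeholders, and the record handling (here just printing each record) has to be adapted to your schema:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.column.page.PageReadStore;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.example.data.simple.convert.GroupRecordConverter;
import org.apache.parquet.format.converter.ParquetMetadataConverter;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;
import org.apache.parquet.io.ColumnIOFactory;
import org.apache.parquet.io.MessageColumnIO;
import org.apache.parquet.io.RecordReader;
import org.apache.parquet.schema.MessageType;

public class ParquetReadExample {

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path("data/part-r-00001-myfile.snappy.parquet"); // placeholder path

    // Read the footer to obtain the schema and the list of row groups (blocks).
    ParquetMetadata footer = ParquetFileReader.readFooter(conf, path, ParquetMetadataConverter.NO_FILTER);
    MessageType schema = footer.getFileMetaData().getSchema();

    ParquetFileReader reader = new ParquetFileReader(
        conf, footer.getFileMetaData(), path, footer.getBlocks(), schema.getColumns());
    try {
      PageReadStore rowGroup;
      // Iterate over all row groups of the file.
      while ((rowGroup = reader.readNextRowGroup()) != null) {
        long rows = rowGroup.getRowCount();
        MessageColumnIO columnIO = new ColumnIOFactory().getColumnIO(schema);
        RecordReader<Group> recordReader = columnIO.getRecordReader(rowGroup, new GroupRecordConverter(schema));
        // Read every record of the current row group and print it.
        for (long i = 0; i < rows; i++) {
          Group group = recordReader.read();
          System.out.println(group);
        }
      }
    } finally {
      reader.close();
    }
  }
}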
