Working with Hadoop means working with the Hadoop Distributed File System (HDFS), so reading, writing, and deleting files via the command line is unavoidable. That can be difficult and tedious when you are not familiar with the common Unix and Hadoop commands. To make this easier, I wrote a small application that works with an HDFS instance running on Ubuntu.
So far, the application is able to:
- Browse the HDFS in a tree view
- Upload and download files to/from the local machine
- Delete files on the HDFS
- Create directories on the HDFS
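For reference, these are roughly the `hdfs dfs` shell commands the application wraps; a sketch only, with placeholder paths and file names (`/user/demo`, `report.csv`), and guarded so the commands are skipped when no `hdfs` binary (and thus no cluster) is available:

```shell
#!/bin/sh
# Run a command only if the hdfs CLI is installed; otherwise print what
# would have run. The paths below are placeholders, not from the app.
run() { command -v hdfs >/dev/null 2>&1 && "$@" || echo "skipped: $*"; }

run hdfs dfs -ls -R /user/demo                    # browse the file tree
run hdfs dfs -mkdir -p /user/demo/input           # create a directory
run hdfs dfs -put report.csv /user/demo/input/    # upload from the local machine
run hdfs dfs -get /user/demo/input/report.csv .   # download to the local machine
run hdfs dfs -rm /user/demo/input/report.csv      # delete a file on the HDFS
```

Doing this by hand for every file is exactly the chore the tree view and upload/download buttons are meant to replace.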
If there is a need (and if I get enough good feedback 😉), I'll add session management for multiple file systems, as well as the ability to start MapReduce jobs from the application (as hinted at by the lower group box).
A download will follow soon!