6 advanced Linux commands for everyday use

For many users, working with the command shell is a big part of the Linux operating system. To be honest, most modern Linux distros are quite user friendly, and you can use one as your everyday system without ever opening the command shell. That does not mean you should not use it.

If you are new to Linux desktop environments, you should try the command shell at some point. Once you understand the flexibility and power that command shells provide, it is quite likely that you will come to prefer the command line to graphical programs. Many everyday tasks are simpler to execute from the shell than from the graphical interface; at least, that has been my experience.

For everyday use, there are several simple Linux shell commands that you can easily master. Once you get the hang of those, you can start trying out some of the more advanced commands. Advanced does not necessarily mean harder; it usually just means a command is used less often and offers many more options and far more flexibility.

Here is a list of some common commands and utilities that you can use alongside the other everyday commands. All of these commands have a long, long list of command line options. It is virtually impossible to cover all of them, let alone the thousands of combinations in which they can be used, so I refer you to the manual pages for additional information.


find is a very useful utility that lets you search for and filter files and directories. You can match and print exactly the files that meet your requirements. Furthermore, once you have that list of files, you can execute a specific command on each of them.

Let’s say you want to find all the JPEG files (with an extension of .jpeg) inside a folder and its sub-folders and move them to another folder. You can use the following find command to do that…

$ find /path/to/folder -type f -iname "*.jpeg" -exec mv '{}' /path/to/newfolder \;
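The name is not the only thing find can filter on; size and modification time are two more tests I find useful. A quick sketch, with arbitrary example thresholds, run against the current folder:

```shell
# Files larger than 10 MB anywhere under the current folder
find . -type f -size +10M

# Files modified within the last 7 days
find . -type f -mtime -7
```

For these numeric tests, a leading + means "more than" and a leading - means "less than".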


grep is another powerful utility that can be used to search for and find files. It can search not only by file name but also by the content of the files. When used together with the previous find command, it can locate files matching even the most complex requirements.

A simple use case: find all the text files inside a folder that contain a particular word. The following command should be able to do that.

$ grep -irnH 'word-to-search' /path/to/folder/*.txt

You can pipe the output of other commands to grep, which means you can search through that output just as you would the contents of a file. Another important feature is that you can use regular expressions with grep to perform matches.

$ find . -type f -iname "*.java" -exec cat '{}' \; | grep -e "\/\/"

In the above example, we go through every Java file in the folder and search for single-line comments in them. This could probably be done better with xargs or with grep alone, but it is just an example.
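grep's -E flag switches to extended regular expressions, which avoid much of the backslash escaping seen above. A small self-contained sketch using a throwaway file in /tmp:

```shell
# Match only lines shaped like key=value
printf 'name=linux\njunk line\nshell=bash\n' > /tmp/grep-demo.txt
grep -E '^[a-z]+=[a-z]+$' /tmp/grep-demo.txt
# prints: name=linux and shell=bash
```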


xargs is a unique utility that allows you to build and execute commands dynamically from standard input. You can pipe the output of a program to xargs, whose first command line argument is another Linux command; the output of the previous command is then passed as arguments to that command. It is probably best explained with an example…

First, let’s see an example where we want to find all the files in a folder that have the word index in their name. You can easily do it with just the find command, as below.

$ find /path/to/folder -type f -iname "*index*"

You can also do this using grep. The example below is exactly the same as the one above, but uses the grep command to filter the output of the preceding find command.

$ find /path/to/folder -type f | grep -i index

Now, if you want to search the content of the files (and not just the filenames), you can use grep for that as well, but you will need to pipe the contents of each matching file to the grep command. There are several different ways to do this, but here is one of the easiest…

$ find /path/to/folder -type f -iname "*index*" | xargs grep -inH index

The above example finds all the files with index in their name, then searches the content of each of them and prints the lines that contain the word index. To contrast the two examples: without xargs, grep matches against the filename, while with xargs the filename is passed as an argument to grep, allowing grep to search the contents of the file.
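One caveat with this pattern: xargs splits its input on whitespace by default, so filenames containing spaces break the pipeline. find's -print0 and xargs' -0 options separate names with NUL bytes instead. A sketch of the safe variant, built around a throwaway folder so it is runnable as-is:

```shell
# NUL-separate the filenames so names with spaces survive the pipe
dir=$(mktemp -d)                              # throwaway demo folder
printf 'see the index here\n' > "$dir/my index.txt"
find "$dir" -type f -iname "*index*" -print0 | xargs -0 grep -inH index
```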


cut is an easy-to-use line manipulation utility that prints selected parts of each input line. It allows you to remove sections of a line or string so as to filter the output of other programs or text files.
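A minimal sketch first: -d sets the field delimiter and -f picks which field(s) to keep from each line.

```shell
# Keep only the first ':'-separated field of each line
printf 'root:x:0:0\ndaemon:x:1:1\n' | cut -d: -f1
# prints: root and daemon
```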

This is quite useful when you want only parts or sections of lines that match a particular criterion. Let’s cook up a wild use case…say you want to parse several log files, pull out the log messages that are errors, and then print only a part of each of those messages.

$ find . -type f -iname "*.log" | xargs grep -i -C1 exception | grep -i "\.java" | cut -d"(" -f2

Above, we find all log files (with an extension of .log) in the current folder, then pass them through xargs and grep to search for exceptions. We filter the output further through another grep to print only the lines that reference a .java class name. We then cut the output to print just the class name of the file. Yes, a convoluted use case, but it has come in handy for me on several occasions.


The following two commands are powerful tools for text manipulation. To be honest, you probably need to master only one of the two, as there is plenty of overlap between their functionalities. I would recommend awk if you have to choose one.


awk, or gawk (GNU Awk), is a pattern scanning and processing language in its own right. You use the awk command to write small programs that match a pattern and then perform an action on string or file input. A search-and-replace feature is a good (but simple) example of how this works, although awk is far more powerful than that.

It is quite effective when you have to write quick one-line programs that act on files or on the output of other programs. A simple search-and-replace example would look something like:

$ awk '{ gsub(/this/,"that"); print }'
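The one-liner above reads from standard input and substitutes on every line. awk's general form is pattern { action }, and by default it splits each line into fields ($1, $2, …). A sketch using /etc/passwd-style input, with -F setting the field separator:

```shell
# Print the first field of lines whose third field is >= 1000
printf 'root:x:0:0\nalice:x:1000:1000\n' | awk -F: '$3 >= 1000 { print $1 }'
# prints: alice
```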


sed is similar to, but not the same as, awk. However, you can use it for many of the same functions and actions. sed is a stream editor that can parse and transform text using a similarly compact transformation language.

A simple example of searching for and substituting values in text has a syntax like this:

$ sed -e 's/this/that/g' filename.txt
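sed can also edit a file in place with -i; attaching a backup suffix, as in -i.bak below, works with both GNU sed and BSD sed. A sketch against a throwaway file in /tmp:

```shell
# In-place substitution, keeping the original as a .bak backup
printf 'this and this\n' > /tmp/sed-demo.txt
sed -i.bak 's/this/that/g' /tmp/sed-demo.txt
cat /tmp/sed-demo.txt
# prints: that and that
```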

As with the previous commands, you can pipe the output of another command to sed as well. Here we change the value /usr/local/bin to /my/bin/ and then replace every forward slash with a pair of colons…why? Just because we can.

$ echo $PATH | sed 's@/usr/local/bin@/my/bin/@g' | sed 's@\/@::@g'