Sunday, December 2, 2012

Allow root to use SSH

Allowing direct root access over SSH is a security risk. However, the following steps will let you log in as root over an SSH session:

Open sshd_config file:
# vi /etc/ssh/sshd_config

Find the line that reads as follows:
PermitRootLogin no

Edit the file and set it as follows (changing "no" to "yes"):
PermitRootLogin yes

Find the line that reads as follows (this line may not exist in your configuration):
DenyUsers root user2 user3

Set it as follows (removing "root" from the listed users):
DenyUsers user2 user3

Save and close the file. Restart the sshd:
# /etc/init.d/sshd restart
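The two edits above can also be made non-interactively with sed. This is only a sketch against a throwaway copy of the file; on a real system the path is /etc/ssh/sshd_config, and you should back it up before touching it:

```shell
# Hypothetical working copy, so the live config is never touched.
printf 'PermitRootLogin no\nDenyUsers root user2 user3\n' > sshd_config.test

# Flip PermitRootLogin to yes
sed -i 's/^PermitRootLogin no/PermitRootLogin yes/' sshd_config.test
# Drop root from the DenyUsers list
sed -i 's/^DenyUsers root /DenyUsers /' sshd_config.test

cat sshd_config.test
```

Note that `sed -i` (in-place editing) is a GNU sed option.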

That will do the job, and you'll be able to log in as root via SSH. If this is a temporary fix, please remember to restore your previous configuration afterwards: as I said before, this is a risky configuration and it's not recommended at all.

Monday, November 26, 2012

6 Stages of Linux Boot Process (Startup Sequence)

Have you ever wondered what happens behind the scenes from the time you press the power button until the Linux login prompt appears?

The following are the 6 high level stages of a typical Linux boot process.

1. BIOS (Basic Input/Output System)

- Performs some system integrity checks
- Searches, loads, and executes the boot loader program.
- It looks for the boot loader on floppy, CD-ROM, or hard drive. You can press a key (typically F12 or F2, but it depends on your system) during BIOS startup to change the boot sequence.
- Once the boot loader program is detected and loaded into memory, BIOS gives control to it.
So, in simple terms BIOS loads and executes the MBR boot loader.

2. MBR (Master Boot Record)

- It is located in the 1st sector of the bootable disk, typically /dev/hda or /dev/sda.
- MBR is 512 bytes in size and has three components: 1) primary boot loader info in the 1st 446 bytes, 2) partition table info in the next 64 bytes, 3) MBR validation check in the last 2 bytes.
- It contains information about GRUB (or LILO in other systems).
So, in simple terms MBR loads and executes the GRUB boot loader.

3. GRUB (Grand Unified Bootloader)

- If you have multiple kernel images installed on your system, you can choose which one to be executed.
- GRUB displays a splash screen and waits for a few seconds; if you don’t enter anything, it loads the default kernel image as specified in the GRUB configuration file.
- GRUB has knowledge of the filesystem (the older Linux loader, LILO, didn’t understand filesystems).
- GRUB configuration file is /boot/grub/grub.conf (/etc/grub.conf is a link to this). The following is a sample grub.conf of CentOS.
title CentOS (2.6.18-194.el5PAE)
          root (hd0,0)
          kernel /boot/vmlinuz-2.6.18-194.el5PAE ro root=LABEL=/
          initrd /boot/initrd-2.6.18-194.el5PAE.img
As you notice from the above info, it contains kernel and initrd image.
So, in simple terms GRUB just loads and executes Kernel and initrd images.

4. Kernel

- Mounts the root file system as specified in the “root=” in grub.conf
- Kernel executes the /sbin/init program
- Since init was the 1st program to be executed by Linux Kernel, it has the process id (PID) of 1. Do a ‘ps -ef | grep init’ and check the pid.
- initrd stands for Initial RAM Disk.
- initrd is used by kernel as temporary root file system until kernel is booted and the real root file system is mounted. It also contains necessary drivers compiled inside, which helps it to access the hard drive partitions, and other hardware.

5. Init

Looks at the /etc/inittab file to decide the Linux run level.
The following are the available run levels:
0 – halt
1 – Single user mode
2 – Multiuser, without NFS
3 – Full multiuser mode
4 – unused
5 – X11
6 – reboot
- Init identifies the default run level from /etc/inittab and uses that to load all the appropriate programs.
- Execute ‘grep initdefault /etc/inittab’ on your system to identify the default run level
If you want to get into trouble, you can set the default run level to 0 or 6. But since you know what 0 and 6 mean, you probably won't do that.
Typically you would set the default run level to either 3 or 5.

6. Runlevel programs

- When the Linux system is booting up, you might see various services getting started. For example, it might say “starting sendmail …. OK”. Those are the runlevel programs, executed from the run level directory as defined by your run level.
- Depending on your default init level setting, the system will execute the programs from one of the following directories.
Run level 0 – /etc/rc.d/rc0.d/
Run level 1 – /etc/rc.d/rc1.d/
Run level 2 – /etc/rc.d/rc2.d/
Run level 3 – /etc/rc.d/rc3.d/
Run level 4 – /etc/rc.d/rc4.d/
Run level 5 – /etc/rc.d/rc5.d/
Run level 6 – /etc/rc.d/rc6.d/
- Please note that there are also symbolic links available for these directories directly under /etc. So, /etc/rc0.d is linked to /etc/rc.d/rc0.d.
- Under the /etc/rc.d/rc*.d/ directories, you would see programs that start with S and K.
- Programs starting with S are used during startup. S for startup.
- Programs starting with K are used during shutdown. K for kill.
- There are numbers right next to S and K in the program names. Those are the sequence numbers in which the programs should be started or killed.
For example, S12syslog starts the syslog daemon and has sequence number 12. S80sendmail starts the sendmail daemon and has sequence number 80. So the syslog program will be started before sendmail.
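The S/K naming convention can be demonstrated with a simulated run-level directory (the script names below are the hypothetical examples from the text; a real system uses /etc/rc.d/rc3.d/):

```shell
# Simulate a run-level directory to show the S/K naming convention.
mkdir -p rc3.d
touch rc3.d/S12syslog rc3.d/S80sendmail rc3.d/K74ypserv

# Startup links (S*) sorted by sequence number give the start order
ls rc3.d | grep '^S' | sort
```

Because the sequence number directly follows the S or K, a plain lexical sort yields the boot order.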

So in summary, that is what happens during the Linux boot process.


Friday, November 9, 2012


So, first things first: Using Crontab

- To see a listing of the current user's cronjobs, issue the following command:
root@anneke:~# crontab -l

This will produce, as standard output, something that resembles the following:
*/20 * * * * /home/squire/bin/rebuild-dns-zones
*/40 * * * * /home/squire/bin/delete-session-files >/dev/null 2>&1
*/10 * * * * rm /srv/*

In this example, cron:
Runs the rebuild-dns-zones script every twenty minutes.
Runs the delete-session-files script every forty minutes, and sends all output, including standard error, to /dev/null.
Deletes all files in the /srv/ directory every ten minutes.

To edit the current user's crontab file, issue the following command:
root@anneke:~# crontab -e

This will open a text editor and allow you to edit the crontab.

Basic cron Use

Entries in the crontab file come in a specific format. Each job is described on one and only one line. Each line begins with a specification of the interval, and ends with a command to be run at that interval.

cronjobs are executed with the default system shell, as if run from the command line prefixed with the following command:
/bin/sh -c

You can run any kind of script, command, or executable with cron.

Specifying Dates For cron

The syntax of crontab entries may be a bit confusing if you are new to cron. Each cron line begins with five asterisks:

* * * * *
These represent the interval of repetition with which tasks are processed. In order, the fields represent:

Minute
Hour
Day of month
Month
Day of week

Minutes are specified as a number from 0 to 59. Hours are specified as numbers from 0 to 23. Days of the month are specified as numbers from 1 to 31. Months are specified as numbers from 1 to 12. Days of the week are specified as numbers from 0 to 7, with Sunday represented as either/both 0 and 7.
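The field order can be sketched by splitting a sample crontab line with the shell (the schedule and the script path here are made up for illustration):

```shell
# Split a sample crontab line into its five time fields plus the command.
line='30 2 * * 1 /opt/bin/backup-nightly'
set -f                 # disable globbing so the * fields survive expansion
set -- $line
minute=$1 hour=$2 dom=$3 month=$4 dow=$5 cmd=$6
echo "minute=$minute hour=$hour day-of-month=$dom month=$month day-of-week=$dow command=$cmd"
# minute=30 hour=2 day-of-month=* month=* day-of-week=1 command=/opt/bin/backup-nightly
```

So this hypothetical entry would run the backup script at 2:30am every Monday.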

Special cron Operators

cron also provides a number of operators that allow you to specify more complex repetition intervals. They are:

The "/" operator "steps through" or "skips" specified units. Therefore "*/3" in the hour field will run the specified job at 12:00am, 3:00am, 6:00am, 9:00am, 12:00pm, 3:00pm, 6:00pm, and 9:00pm. A "*/3" in the "day of month" field runs the given task every third day of the month, starting on the 1st (the 1st, 4th, 7th, and so on).
The "," operator allows you to specify a list of times for repetition. Comma separated lists of times must not contain a space.
The "-" operator specifies a range of values. "2-4" in the month field will run a task in February, March, and April. "1-5" in the day of week field will run a task every weekday.
Fields in crontab entries are separated by spaces. If you are using special cron operators, be particularly careful to avoid unintentional spaces in your command.

Special cron Syntax

There are also a number of special cron schedule shortcuts that you can use to specify common intervals to cron. These are specified on the crontab entry in place of the conventional five column date specification. These special interval statements are:

@yearly and @annually both run the specified task every year at 12:00am on the 1st of January. This is equivalent to specifying "0 0 1 1 *" on the crontab line.
@daily and @midnight both run the cronjob every day at 12:00am. This is equivalent to the following cron syntax: "0 0 * * *".
@monthly runs the job once a month, on the 1st, at 12:00am. In standard cron syntax this is equivalent to: "0 0 1 * *".
@weekly runs the job once a week at 12:00am on Sunday. This is the same as specifying "0 0 * * 0" on the crontab line.
@hourly runs the job at the top of every hour. In standard cron syntax this is equivalent to: "0 * * * *".
The @reboot statement runs the specified command once, at startup. Generally, boot-time tasks are managed by init scripts in /etc/init.d, but @reboot cronjobs may be useful for users who don't have access to edit the init scripts.

Examples of crontab entries

File excerpt:crontab
45 16 1,15 * * /opt/bin/payroll-bi-monthly
45 4 * * 5 /opt/bin/payroll-weekly

In the first example, the /opt/bin/payroll-bi-monthly application is run at 4:45pm (45 16) on the 1st and 15th of every month (1,15). In the second example, /opt/bin/payroll-weekly is run at 4:45am (45 4) every Friday (5).

File excerpt:crontab
1 0 * * * /opt/bin/cal-update-daily
1 0 */2 * * /opt/bin/cal-update

These cronjobs will both run at 12:01am (1 0). The cal-update-daily job will run every day. The cal-update job will run every other day.

File excerpt:crontab
*/20 * * * * /home/squire/bin/rebuild-dns-zones
30 */2 * * * /opt/bin/backup-static-files
0 * * * * /opt/bin/compress-static-files
@hourly /opt/bin/compress-static-files

In the first example, the rebuild-dns-zones script runs every twenty minutes. In the second example, the backup-static-files program runs at 30 past the hour, (i.e. the "bottom of the hour") every other hour. In the final two examples, the compress-static-files script runs at the beginning of every hour.

Advanced cron Use

As cron is simply a tool for scheduling jobs, it can be used in a number of different applications and situations to accomplish a wide variety of tasks. Consider the following possibilities:

Running Jobs as Other Users

You can use cron to regularly run tasks as another user on the system. With root access, issue the following command:
root@anneke:~# crontab -u www-data -e

This will allow you to edit the crontab for the www-data user. You can run cronjobs as the root user, or as any user on the system. This is useful if you want to restrict the ability of a script to write to certain files. While the ability to run jobs as system users is extremely powerful, it can sometimes be confusing to manage a large number of crontab files dispersed among a number of system users. Also, carefully consider the security implications of running a cronjob with more privileges than is required.

Redirecting Job Output

By default, cron will send email to the executing user's mailbox with any output or errors that would normally be sent to the standard output or standard error. If you don't care about the standard output, you can redirect it to /dev/null. Append the following to the end of the line in your crontab file:

>/dev/null

This will only redirect output that is sent to "standard out" (i.e. stdout). If your script generates an error, cron will still send the error to your email. If you want to ignore all output, even error messages, append the following to the end of the line in your crontab file instead:

>/dev/null 2>&1
While this can clean up your email box of unwanted email, redirecting all output to /dev/null can cause you to miss important errors if something goes wrong and a cronjob begins to generate errors.
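To see exactly what each redirection keeps, here is a small stand-in for a cronjob (the function name and messages are made up):

```shell
# A function standing in for a cronjob that writes to both streams.
job() { echo "normal output"; echo "error output" >&2; }

# Discard stdout only: the error line still escapes (cron would mail it).
# Note the order: 2>&1 first points stderr at the current stdout (the
# capture), then >/dev/null sends the original stdout to the bit bucket.
errs=$(job 2>&1 >/dev/null)
echo "kept: $errs"

# Discard both streams, as with ">/dev/null 2>&1" in the crontab line.
all=$(job >/dev/null 2>&1; echo "nothing captured")
echo "$all"
```

The ordering of the two redirections matters: `2>&1 >/dev/null` and `>/dev/null 2>&1` behave differently, as the comments above show.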


Thanks again to the Linode Library for so much provided knowledge  :)

Sunday, November 4, 2012

Show the List of Installed Packages on Ubuntu or Debian

The command we need to use to list the installed packages on Ubuntu and/or Debian is dpkg --get-selections, which will give us a list of all the currently installed packages.

root@anneke:~# dpkg --get-selections
adduser                                         install
alsa-base                                       install
alsa-utils                                      install
apache2                                         install
apache2-mpm-prefork                             install
apache2-utils                                   install
apache2.2-common                                install
apt                                             install
apt-utils                                       install

The full list can be quite long, so it’s much easier to filter through grep to get results for the exact package you need. For instance, I wanted to see which mysql packages I had already installed through apt-get:

root@anneke:~# dpkg --get-selections | grep mysql
libdbd-mysql-perl install
libmysqlclient18:i386 install
mysql-client-5.5 install
mysql-client-core-5.5 install
mysql-common install
mysql-server install
mysql-server-5.5 install
mysql-server-core-5.5 install
php5-mysql install

For extra credit, you can find the locations of the files within a package from the list by using the dpkg -L command, such as:

root@anneke:~# dpkg -L php5-mysql

Hope this helps :) Cheers!

Sunday, October 28, 2012

HOWTO Install Gnome 3 Desktop Environment on Ubuntu

GNOME 3 (a.k.a Gnome Shell) is the next major version of the GNOME desktop. After many years of a largely unchanged GNOME 2.x experience, GNOME 3 brings a fresh look and feel with gnome-shell. If you just installed Ubuntu 12.10, this will guide you in getting Gnome 3 installed on your system. NOTE: This tutorial is not for previous versions of Ubuntu.

The user experience of GNOME 3 is largely defined by gnome-shell, which is a compositing window manager and desktop shell. It replaces the GNOME 2 desktop shell, which consisted of metacity, gnome-panel, notification-daemon and nautilus.

gnome-shell provides the top bar on the screen, which hosts the ‘system status’ area in the top right, a clock in the center, and a hot corner that switches to the so-called ‘overview’ mode, which provides easy access to applications and windows.

In gnome-shell, notifications are displayed in the ‘messaging area’ which is an automatically hiding bar at the bottom of the screen. This is also where integrated chat functionality is provided.

Since the requirements of gnome-shell on the graphics system may not be met by certain hardware/driver combinations, GNOME 3 also supports a ‘fallback mode’, in which gnome-panel, metacity and notification-daemon run instead of gnome-shell. Note that this mode is not a ‘Classic GNOME’ mode; the panel configuration will be adjusted to be similar to the shell.

The Fallback will be handled automatically by gnome-session, which will detect insufficient graphics capabilities and run a different session.

Step One: Install the Gnome 3 Team PPA:

sudo add-apt-repository ppa:gnome3-team/gnome3
sudo apt-get update
sudo apt-get install gnome-shell gnome-session-fallback

Step Two: Check for any updates.

sudo apt-get update
sudo apt-get upgrade

Step Three: Disable auto-login in ‘User Accounts’ before changing sessions, otherwise it will keep defaulting back to Unity each time you reboot your system. Now log out and select “GNOME” at the LightDM login screen. You need to click on the little gear icon next to where you type your password to switch your session to the Gnome 3 Desktop Environment.

Step Four: Save anything you were working on at this point. Reload GNOME Shell (press ALT + F2 and enter “r” or log out and log back in).

Optional: Use GNOME Tweak Tool to easily enable/disable extensions or switch between GNOME Shell themes on the fly – there’s no need to restart GNOME Shell anymore.

Special Note: I found that some things didn’t work anymore after I installed Gnome 3 on Ubuntu 12.10. If for some reason you don’t want Gnome 3 after you install everything, here is how you can revert everything back to the way it was before you used the above tutorial (just in case).

How to Uninstall Gnome 3:

sudo apt-get install ppa-purge
sudo ppa-purge ppa:gnome3-team/gnome3
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install ubuntu-desktop


Why Install Gnome 3 in Ubuntu if you already have Unity?

Here are some reasons that are worth reading:


Monday, October 22, 2012

Gnome Fallback on Ubuntu 12.10

For those who don’t really like Unity, and still long for the original Gnome 2.* Desktop look and feel, we can see that they have done a terrific job updating the Ubuntu Classic “Fallback” Desktop Environment in Ubuntu.

To install the Gnome Fallback session DE, you will need to install the “gnome-session-fallback” package using the Ubuntu Software Center or the Synaptic Package Manager (which no longer comes installed by default as of 12.04). Also, Ubuntu Tweak has been discontinued and replaced with MyUnity and Y PPA Manager, which are very similar applications.

Why switch from Unity Desktop Environment to something else?

To install Classic Ubuntu “Fallback” Desktop Environment:

sudo apt-get install gnome-session-fallback
sudo apt-get install indicator-applet-appmenu
sudo apt-get install gnome-tweak-tool

Then log out and select “GNOME Classic” at the LightDM login screen. IMPORTANT: You need to click on the little gear icon next to where you type your password to change your session to the Ubuntu Classic “Fallback” session. Gnome 3.x desktop themes can be added with gnome-tweak-tool.

Hope you find this useful!


Google Chrome Problem in Ubuntu

I just installed the new Ubuntu 12.10 on my laptop, and when I installed google-chrome I started seeing these errors scrolling through the terminal, which wasn't friendly at all:

ALSA lib pcm_dmix.c:957:(snd_pcm_dmix_open) The dmix plugin supports only playback stream

So, the solution I found on the internet was as easy as running this command:

google-chrome --disable-bundled-ppapi-flash

After that just restart the browser and problem solved.

Hope that helps. Cheers!

Monday, October 15, 2012

Linux ate my RAM!!!

Have you ever thought that a Linux server is eating your RAM? Well, don't panic, your RAM is just fine.

Here are some answers to questions you may be asking yourself:

What's going on?
Linux is borrowing unused memory for disk caching. This makes it look like you are low on memory, but you are not. Everything is fine!

Why is it doing this?
Disk caching makes the system much faster. There are no downsides, except for confusing newbies. It does not take memory away from applications in any way, ever.

What if I want to run more applications?
If your applications want more memory, they just take back a chunk that the disk cache borrowed. Disk cache can always be given back to applications immediately. You are not low on RAM!!!

Do I need more swap?
No, disk caching only borrows the ram that applications don't currently want. It will not use swap. If applications want more memory, they just take it back from the disk cache. They will not start swapping.

How do I stop Linux from doing this?
You can't disable disk caching. The only reason anyone ever wants to disable disk caching is because they think it takes memory away from their applications, which it doesn't. Disk cache makes applications load faster and run smoother, but it never ever takes memory away from them. Therefore, there's absolutely no reason to disable it!

Why do top and free say all my RAM is used if it isn't?
This is just a misunderstanding of terms. Both you and Linux agree that memory taken by applications is "used", while memory that isn't used for anything is "free".

But what do you call memory that is both used for something and available for applications?
You would call that "free", but Linux calls it "used".

Memory that is                                       You'd call it    Linux calls it
taken by applications                                Used             Used
available for applications, and used for something   Free             Used
not used for anything                                Free             Free

This "something" is what top and free call "buffers" and "cached". Since your terminology and Linux's differ, you think you are low on RAM when you're not.

How do I see how much free ram I really have?

To see how much RAM is free to use for your applications, run free -m and look at the row that says "-/+ buffers/cache", in the column that says "free". That is your answer in megabytes:

root@anneke:~ # free -m
             total       used       free     shared    buffers     cached
Mem:          1504       1491         13          0         91        764
-/+ buffers/cache:        635        869
Swap:         2047          6       2041
root@anneke:~ #

If you don't know how to read the numbers, you'll think the ram is 99% full when it's really just 42%.
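The arithmetic behind that 42% can be checked by hand. The numbers below are taken from the sample free -m output above; "real" application usage is used minus buffers minus cached (free rounds kB values, so it shows 635 where this computes 636):

```shell
# Figures from the sample output above, in MB.
total=1504 used=1491 buffers=91 cached=764

# What applications actually consume, ignoring reclaimable cache.
real_used=$((used - buffers - cached))
pct=$((100 * real_used / total))
echo "applications use ${real_used} MB of ${total} MB (${pct}%)"
```

So "used" at 99% of total is harmless; the figure that matters is the roughly 42% consumed by applications.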

Hope this helps to clear some confused minds, as mine was.

Thanks a lot to the knowledge of Gerardo A.

Friday, September 28, 2012

Static IP Address Configuration

To configure the Internet Protocol version 4 (IPv4) properties of a network connection with a static IP address for servers running Linux operating systems, you need to update and/or edit the network configuration files.

These configuration files are located on each Linux based system, as follow:
RHEL / Red hat / Fedora / CentOS Linux: /etc/sysconfig/network-scripts/
Debian / Ubuntu Linux: /etc/network/interfaces

Example Setup:

IP address:
Hostname: anneke.ceren.sys
Domain name:
Gateway IP:
DNS Server IP # 1:
DNS Server IP # 2:
DNS Server IP # 3:

RHEL / Red hat / Fedora / CentOS Linux Static IP Configuration:

For static IP configuration you need to edit the following files. Edit /etc/sysconfig/network as follows:

# vi /etc/sysconfig/network

Edit /etc/sysconfig/network-scripts/ifcfg-eth0:

# vi /etc/sysconfig/network-scripts/ifcfg-eth0

# Intel Corporation 82573E Gigabit Ethernet Controller (Copper)


Edit /etc/resolv.conf and setup DNS servers:

# vi /etc/resolv.conf
search ceren.sys

Finally, you need to restart the networking service:
# /etc/init.d/network restart

Now, we have to verify the new static IP configuration for eth0:
# ifconfig eth0
# route -n
# ping
# ping

Debian / Ubuntu Linux Static IP Configuration:

Edit /etc/hostname:

# vi /etc/hostname


Edit /etc/network/interfaces:

# vi /etc/network/interfaces

iface eth0 inet static
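For illustration only, a complete static stanza might look like the following, using placeholder addresses from the documentation range (192.0.2.0/24); substitute the values from your own setup:

```
auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
```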

Edit /etc/resolv.conf and setup DNS servers:

# vi /etc/resolv.conf 
search ceren.sys

Finally, you need to restart the networking service:
# /etc/init.d/networking restart

Type the following commands to verify your new setup, enter:
# ifconfig eth0
# route -n
# ping

Special Thanks to UnixCraft

Friday, August 10, 2012

10 good UNIX usage habits

1) Make directory trees in a single swipe.

For example: Defining directory trees individually:
~ $ mkdir tmp
~ $ cd tmp
~/tmp $ mkdir a
~/tmp $ cd a
~/tmp/a $ mkdir b
~/tmp/a $ cd b
~/tmp/a/b/ $ mkdir c
~/tmp/a/b/ $ cd c
~/tmp/a/b/c $

It is so much quicker to use the -p option to mkdir and make all parent directories along with their children in a single command. It is worth your time to conscientiously pick up the good habit:

For example:
~ $ mkdir -p tmp/a/b/c

You can use this option to make entire complex directory trees, not just simple hierarchies; these are great to use inside scripts.

For example, defining a complex directory tree with one command:
~ $ mkdir -p project/{lib/ext,bin,src,doc/{html,info,pdf},demo/stat/a}

2) Change the path; do not move the archive.

Another bad usage pattern is moving a .tar archive file to a certain directory because it happens to be the directory you want to extract it in. You never need to do this. You can unpack any .tar archive file into any directory you like. Use the -C option when unpacking an archive file to specify the directory to unpack it in:

~ $ tar xvf newarc.tar.gz -C tmp/a/b/c

3) Combine your commands with control operators.

Run a command only if another command returns a zero exit status:
Use the && control operator to combine two commands so that the second is run only if the first command returns a zero exit status. In other words, if the first command runs successfully, the second command runs. If the first command fails, the second command does not run at all. 

For example:               
~ $ cd tmp/a/b/c && tar xvf ~/archive.tar

In this example, the contents of the archive are extracted into the tmp/a/b/c directory only if that directory exists (i.e., the cd succeeds). If the directory does not exist, the tar command does not run, so nothing is extracted.

Run a command only if another command returns a non-zero exit status:
Similarly, the || control operator separates two commands and runs the second command only if the first command returns a non-zero exit status. In other words, if the first command is successful, the second command does not run. If the first command fails, the second command does run. This operator is often used when testing for whether a given directory exists and, if not, it creates one.

For example:                
~ $ cd tmp/a/b/c || mkdir -p tmp/a/b/c

A combined example of good habit #3: Combining commands with control operators:                
~ $ cd tmp/a/b/c || mkdir -p tmp/a/b/c && tar xvf ~/archive.tar -C tmp/a/b/c

4) Quote variables with caution:

Always be careful with shell expansion and variable names. It is generally a good idea to enclose variable calls in double quotation marks, unless you have a good reason not to. Similarly, if you are directly following a variable name with alphanumeric text, be sure also to enclose the variable name in curly braces ({}) to distinguish it from the surrounding text. Otherwise, the shell interprets the trailing text as part of your variable name -- and most likely returns a null value. 

For example: Quoting (and not quoting) a variable:
~ $ ls tmp/
a b
~ $ VAR="tmp/*"
~ $ echo $VAR
tmp/a tmp/b
~ $ echo "$VAR"
tmp/*
~ $ echo $VARa

~ $ echo "$VARa"

~ $ echo "${VAR}a"
tmp/*a
~ $ echo ${VAR}a
tmp/*a
~ $

5) Use escape sequences to manage long input:

You have probably seen code examples in which a backslash (\) continues a long line over to the next line, and you know that most shells treat what you type over successive lines joined by a backslash as one long line. However, you might not take advantage of this function on the command line as often as you can. The backslash is especially handy if your terminal does not handle multi-line wrapping properly or when your command line is smaller than usual (such as when you have a long path on the prompt). The backslash is also useful for making sense of long input lines as you type them, as in the following example:

For example: Using a backslash for long input
~ $ cd tmp/a/b/c || \
> mkdir -p tmp/a/b/c && \
> tar xvf ~/archive.tar -C tmp/a/b/c

Alternatively, the following configuration also works. For example: Using a backslash for long input
~ $ cd tmp/a/b/c \
>                 || \
> mkdir -p tmp/a/b/c \
>                    && \
> tar xvf ~/archive.tar -C tmp/a/b/c

However you divide an input line over multiple lines, the shell always treats it as one continuous line, because it always strips out all the backslashes and extra spaces.
Note: In most shells, when you press the up arrow key, the entire multi-line entry is redrawn on a single, long input line.

6) Group your commands together in a list

Most shells have ways to group a set of commands together in a list so that you can pass their sum-total output down a pipeline or otherwise redirect any or all of its streams to the same place. You can generally do this by running a list of commands in a subshell or by running a list of commands in the current shell.
Run a list of commands in a subshell
Use parentheses to enclose a list of commands in a single group. Doing so runs the commands in a new subshell and allows you to redirect or otherwise collect the output of the whole, as in the following example:

Example: Running a list of commands in a subshell:
~ $ ( cd tmp/a/b/c/ || mkdir -p tmp/a/b/c && \
> VAR=$PWD; cd ~; tar xvf archive.tar -C $VAR ) \
> | mailx -s "Archive contents" admin

In this example, the contents of the archive are extracted into the tmp/a/b/c/ directory, and the output of the grouped commands, including a list of extracted files, is mailed to the admin address.
The use of a subshell is preferable in cases when you are redefining environment variables in your list of commands and you do not want those definitions to apply to your current shell.
Run a list of commands in the current shell
Use curly braces ({}) to enclose a list of commands to run in the current shell. Make sure you include spaces between the braces and the actual commands, or the shell might not interpret the braces correctly. Also, make sure that the final command in your list ends with a semicolon, as in the following example:

Another example of good habit: running a list of commands in the current shell
~ $ { cp ${VAR}a . && chown -R guest.guest a && \
> tar cvf newarchive.tar a; } | mailx -s "New archive" admin

7) Use xargs outside of find

Use the xargs tool as a filter for making good use of output culled from the find command. The general precept is that a find run provides a list of files that match some criteria. This list is passed on to xargs, which then runs some other useful command with that list of files as arguments, as in the following example:

Example of the classic use of the xargs tool
~ $ find some-file-criteria some-file-path | \
> xargs some-great-command-that-needs-filename-arguments

However, do not think of xargs as just a helper for find; it is one of those underutilized tools that, when you get into the habit of using it, you want to try on everything, including the following uses.

Passing a space-delimited list
In its simplest invocation, xargs is like a filter that takes as input a list (with each member on a single line). The tool puts those members on a single space-delimited line:

Example of output from the xargs tool:
~ $ xargs
a
b
c
a b c
~ $

You can send the output of any tool that outputs file names through xargs to get a list of arguments for some other tool that takes file names as an argument, as in the following example:

Example of using of the xargs tool:
~/tmp $ ls -1 | xargs
December_Report.pdf README a archive.tar
~/tmp $ ls -1 | xargs file
December_Report.pdf: PDF document, version 1.3
a: directory
archive.tar: POSIX tar archive Bourne shell script text executable
~/tmp $

The xargs command is useful for more than passing file names. Use it any time you need to filter text into a single line:

Example: Using the xargs tool to filter text into a single line:
~/tmp $ ls -l | xargs
-rw-r--r-- 7 joe joe 12043 Jan 27 20:36 December_Report.pdf -rw-r--r-- 1 \
root root 238 Dec 03 08:19 README drwxr-xr-x 38 joe joe 354082 Nov 02 \
16:07 a -rw-r--r-- 3 joe joe 5096 Dec 14 14:26 archive.tar -rwxr-xr-x 1 \
joe joe 3239 Sep 30 12:40
~/tmp $

Be cautious using xargs!
Technically, a rare situation occurs in which you could get into trouble using xargs. By default, the end-of-file string is an underscore (_); if that character is sent as a single input argument, everything after it is ignored. As a precaution against this, use the -e flag, which, without arguments, turns off the end-of-file string completely.

8) Know when grep should do the counting -- and when it should step aside

Avoid piping a grep to wc -l in order to count the number of lines of output. The -c option to grep gives a count of lines that match the specified pattern and is generally faster than a pipe to wc, as in the following example:

Example: Counting lines with and without grep
~ $ time grep and tmp/a/longfile.txt | wc -l

real    0m0.097s
user    0m0.006s
sys     0m0.032s
~ $ time grep -c and tmp/a/longfile.txt

real    0m0.013s
user    0m0.006s
sys     0m0.005s
~ $ 

In addition to the speed factor, the -c option is also a better way to do the counting. With multiple files, grep with the -c option returns a separate count for each file, one per line, whereas a pipe to wc gives only a total count for all files combined.

However, regardless of speed considerations, this example showcases another common error to avoid. These counting methods only give counts of the number of lines containing matched patterns -- and if that is what you are looking for, that is great. But in cases where lines can have multiple instances of a particular pattern, these methods do not give you a true count of the actual number of instances matched. To count the number of instances, use wc to count, after all. First, run a grep command with the -o option, if your version supports it. This option outputs only the matched patterns, one per line, and not the line itself. But you cannot use it in conjunction with the -c option, so use wc -l to count the lines, as in the following example:

Example: Counting pattern instances with grep
~ $ grep -o and tmp/a/longfile.txt | wc -l
~ $

In this case, a call to wc is slightly faster than a second call to grep with a dummy pattern put in to match and count each line (for example, grep -c with a pattern that matches every line).
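The per-file behavior of -c mentioned above is easy to see directly (file names and contents here are made up for illustration):

```shell
dir=$(mktemp -d)
printf 'sand\nband\n' > "$dir/one.txt"
printf 'and\n'        > "$dir/two.txt"

# -c reports one count per file, prefixed with the file name ...
grep -c and "$dir/one.txt" "$dir/two.txt"
# ... whereas piping to wc -l collapses them into a single total
grep and "$dir/one.txt" "$dir/two.txt" | wc -l
```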

9) Match certain fields in output, not just lines

A tool like awk is preferable to grep when you want to match the pattern in only a specific field in the lines of output and not just anywhere in the lines.
The following simplified example shows how to list only those files modified in December:

Example: Using grep to find patterns in specific fields
~/tmp $ ls -l /tmp/a/b/c | grep Dec
-rw-r--r--  7 joe joe  12043 Jan 27 20:36 December_Report.pdf
-rw-r--r--  1 root root  238 Dec 03 08:19 README
-rw-r--r--  3 joe joe   5096 Dec 14 14:26 archive.tar
~/tmp $

In this example, grep filters the lines, outputting all files with Dec in their modification dates as well as in their names. Therefore, a file such as December_Report.pdf is matched, even if it has not been modified since January. This probably is not what you want. To match a pattern in a particular field, it is better to use awk, where a relational operator matches the exact field, as in the following example:

Example: Using awk to find patterns in specific fields
~/tmp $ ls -l | awk '$6 == "Dec"'
-rw-r--r--  3 joe joe   5096 Dec 14 14:26 archive.tar
-rw-r--r--  1 root root  238 Dec 03 08:19 README
~/tmp $
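If the field only needs to contain the pattern rather than equal it exactly, awk's ~ operator matches a regular expression against a single field. A self-contained sketch with made-up input lines:

```shell
# Print the first field of rows whose second field matches /Dec/
printf 'report.pdf Jan 27\nREADME Dec 03\narchive.tar Dec 14\n' |
    awk '$2 ~ /Dec/ { print $1 }'
# README
# archive.tar
```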

See Resources for more details about how to use awk.

10) Stop piping cats

A basic-but-common grep usage error involves piping the output of cat to grep to search the contents of a single file. This is absolutely unnecessary and a waste of time, because tools such as grep take file names as arguments. You simply do not need to use cat in this situation at all, as in the following example:

Example: Using grep with and without cat 
~ $ time cat tmp/a/longfile.txt | grep and

real    0m0.015s
user    0m0.003s
sys     0m0.013s
~ $ time grep and tmp/a/longfile.txt

real    0m0.010s
user    0m0.006s
sys     0m0.004s
~ $ 

This mistake applies to many tools. Because most tools can take standard input as an argument, indicated by a hyphen (-), even using cat to intersperse multiple files with stdin is often unnecessary. Concatenating before a pipe is really only needed when you use cat with one of its several filtering options.
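The hyphen convention is easy to check; here grep searches a file and standard input in a single invocation, with no cat in sight (the file contents are illustrative):

```shell
f=$(mktemp)
printf 'needle in file\nnothing here\n' > "$f"

# "-" stands for standard input, so grep searches both sources at once
printf 'needle from stdin\n' | grep needle "$f" -
```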

Embrace good habits!

It is good to examine your command-line habits for any bad usage patterns. Bad habits slow you down and often lead to unexpected errors. Picking up these good habits is a positive step toward sharpening your UNIX command-line skills.

Monday, July 23, 2012

Changing timezone in RedHat

In order to change the timezone of your system, you will need to edit the file /etc/sysconfig/clock directly:

# vi /etc/sysconfig/clock
Note: If your system's BIOS has UTC set to true, then set UTC to true. If it has it set to false, set it to false. UTC in the configuration file must always reflect your BIOS settings.

To pick the particular zone you wish to use, set ZONE to a file located under /usr/share/zoneinfo. It is wise to note the directory structure: if you need to set the timezone to that of Shanghai, which is located in the Asia directory, you will have to set your ZONE variable to the following:

ZONE="Asia/Shanghai"

Or perhaps you need to set the timezone to that of East Brazil:

ZONE="Brazil/East"

Finally, save the file /etc/sysconfig/clock; on the next reboot, the system will use the defined timezone.

For the time on the machine to reflect the changed timezone, we need to link the zoneinfo file to /etc/localtime. This can be done as follows:

If you are setting your timezone to "Brazil/East", link the following file to /etc/localtime:
# ln -sf /usr/share/zoneinfo/Brazil/East /etc/localtime

Now, typing the date command to display the time, you should see it reflect the newly linked timezone:
 # date
Thu Sep 30 10:06:23 BRT 2004
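The steps above can be combined into a short sketch. Run the real thing as root against /etc; the $ROOT prefix is an assumption added here so the sketch can be tried without touching the live system, and Brazil/East is just the example zone:

```shell
#!/bin/sh
ROOT="${ROOT:-/tmp/tz-demo}"   # set ROOT="" to act on the real /etc (as root)
ZONE="Brazil/East"

# 1. Record the zone (and the BIOS UTC setting) in the clock file
mkdir -p "$ROOT/etc/sysconfig"
printf 'ZONE="%s"\nUTC=true\n' "$ZONE" > "$ROOT/etc/sysconfig/clock"

# 2. Re-link localtime to the matching zoneinfo file
ln -sf "/usr/share/zoneinfo/$ZONE" "$ROOT/etc/localtime"

# 3. Verify what the link now points at
readlink "$ROOT/etc/localtime"
# /usr/share/zoneinfo/Brazil/East
```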