Friday, December 19, 2014

HowTo Install Sun-Java on Debian Wheezy


For Debian Wheezy and later releases, Sun Java is no longer available in the repositories. However, java-package can be used to generate Debian packages from the upstream distributables of the JDK as provided by Oracle.

To create packages for Debian Wheezy and later, do the following.

Process

1. Add a "contrib" component to /etc/apt/sources.list, for example:

# Debian 7 "Wheezy"
deb http://http.debian.net/debian/ wheezy main contrib

2. Update the list of available packages and install the java-package package:
apt-get update && apt-get install java-package

3. Download the appropriate JDK from http://www.oracle.com/technetwork/java/javase/downloads/index.html, or choose an older version if your needs require you to run an outdated (and potentially insecure) release.

Download the desired Java JDK/JRE binary distribution (Oracle). Choose tar.gz archives or self-extracting archives, do not choose the RPM!

4. Use java-package to create a Debian package, for example:
make-jpkg jdk-7u45-linux-x64.tar.gz

5. Install the binary package created:
dpkg -i oracle-j2sdk1.7_1.7.0+update45_amd64.deb
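
A quick sanity check after installing (the exact version string will of course depend on the JDK you packaged):
java -version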

Configuration

By default, the Debian alternatives system will automatically select the best installed version of Java as the default. If the symlinks have been set manually, they will be preserved by the tools: the update-alternatives tools try hard to respect explicit configuration from the local admin, and manually created symlinks count as explicit configuration. To reset the alternative symlinks to their default value, use the --auto option.
update-alternatives --auto java

If you'd like to override the default and use a specific version, use --config and manually select the desired version.
update-alternatives --display java
update-alternatives --config java

Choose the appropriate number for the desired alternative.

The appropriate java binary will automatically be in PATH by virtue of the /usr/bin/java alternative symlink.

You can also use the update-java-alternatives tool from the java-common package, which lets you update all alternatives belonging to one runtime or development kit at a time.
update-java-alternatives -l
update-java-alternatives -s j2sdk1.7-oracle

Monday, December 15, 2014

HOWTO Convert a KVM Disk from RAW to QCOW2

The qemu-img convert command can do conversion between multiple formats, including raw, qcow2, VDI (VirtualBox), VMDK (VMware) and VHD (Hyper-V).

Why qcow2? The QEMU copy-on-write format has a range of special features, including the ability to take multiple snapshots, smaller images on filesystems that don't support sparse files, optional AES encryption, and optional zlib compression.

Image format        Argument to qemu-img
raw                 raw
qcow2               qcow2
VDI (VirtualBox)    vdi
VMDK (VMware)       vmdk
VHD (Hyper-V)       vpc

The following command will convert a KVM image from RAW to qcow2:
qemu-img convert -f raw -O qcow2 /home/vservers/server1/server1.img /home/vservers/server1/server1.qcow2

After the conversion, we can check the image with the following command:
ximena@xdev:~$ qemu-img info server1.qcow2
image: server1.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 1.5G
cluster_size: 65536
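
If disk space matters, qemu-img convert can also compress the target qcow2 image (-c) and show a progress bar (-p); a quick sketch using the same paths as above:
qemu-img convert -f raw -O qcow2 -c -p /home/vservers/server1/server1.img /home/vservers/server1/server1.qcow2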

Monday, December 8, 2014

HOWTO MegaCli

Recently I've had to do some work with Dell PowerEdge servers and the LSI MegaRAID controllers.
MegaCli is available for Linux, DOS, Windows, Netware and Solaris. You can get it from LSI’s website www.lsi.com (search for MegaRAID SAS).

Inside the tarball or zip file you’ll find an .rpm archive which contains the MegaCli and MegaCli64 binaries (they will be installed to /opt/MegaRAID/MegaCli).

Here are some useful parameters:

Adapter parameter -aN
The parameter -aN (where N is a number starting with zero or the string ALL) specifies the PERC5/i adapter ID. If you have only one controller it’s safe to use ALL instead of a specific ID, but you’re encouraged to use the ID for everything that makes changes to your RAID configuration.

Physical drive parameter -PhysDrv [E:S]
For commands that operate on one or more physical drives, the -PhysDrv [E:S] parameter is used, where E is the enclosure device ID in which the drive resides and S the slot number (starting with zero). You can get the enclosure device ID using "MegaCli -EncInfo -aALL". The E:S syntax is also used for specifying the physical drives when creating a new RAID virtual drive.

Virtual drive parameter -Lx
The parameter -Lx is used for specifying the virtual drive (where x is a number starting with zero or the string all).
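
As a quick illustration of all three parameters (the adapter, enclosure, slot and virtual drive numbers below are made up; substitute the IDs from your own system):
MegaCli -AdpAllInfo -a0               # adapter 0
MegaCli -PDInfo -PhysDrv [32:2] -a0   # physical drive in enclosure 32, slot 2
MegaCli -LDInfo -L0 -a0               # virtual drive 0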


Controller information
MegaCli -AdpAllInfo -aALL
MegaCli -CfgDsply -aALL
MegaCli -AdpEventLog -GetEvents -f events.log -aALL && cat events.log

Enclosure information
MegaCli -EncInfo -aALL

Virtual drive information
MegaCli -LDInfo -Lall -aALL

Physical drive information
MegaCli -PDList -aALL
MegaCli -PDInfo -PhysDrv [E:S] -aALL

Battery backup information
MegaCli -AdpBbuCmd -aALL


Controller management:

Silence active alarm
MegaCli -AdpSetProp AlarmSilence -aALL

Disable alarm
MegaCli -AdpSetProp AlarmDsbl -aALL

Enable alarm
MegaCli -AdpSetProp AlarmEnbl -aALL

Physical drive management

Set state to offline
MegaCli -PDOffline -PhysDrv [E:S] -aN

Set state to online
MegaCli -PDOnline -PhysDrv [E:S] -aN

Mark as missing
MegaCli -PDMarkMissing -PhysDrv [E:S] -aN

Prepare for removal
MegaCli -PDPrpRmv -PhysDrv [E:S] -aN

Replace missing drive
MegaCli -PdReplaceMissing -PhysDrv [E:S] -ArrayN -rowN -aN

The number N of the array parameter is the Span Reference you get using "MegaCli -CfgDsply -aALL" and the number N of the row parameter is the Physical Disk in that span or array starting with zero (it’s not the physical disk’s slot!).
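
For example (hypothetical IDs again: enclosure 32, slot 2, array 0, row 1, adapter 0):
MegaCli -PdReplaceMissing -PhysDrv [32:2] -Array0 -row1 -a0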

Rebuild drive
MegaCli -PDRbld -Start -PhysDrv [E:S] -aN
MegaCli -PDRbld -Stop -PhysDrv [E:S] -aN
MegaCli -PDRbld -ShowProg -PhysDrv [E:S] -aN

Clear drive
MegaCli -PDClear -Start -PhysDrv [E:S] -aN
MegaCli -PDClear -Stop -PhysDrv [E:S] -aN
MegaCli -PDClear -ShowProg -PhysDrv [E:S] -aN

Bad to good (or back to good)
MegaCli -PDMakeGood -PhysDrv [E:S] -aN

This changes a drive in state Unconfigured-Bad to Unconfigured-Good.

Walkthrough: Change/replace a drive

Set the drive offline, if it is not already offline due to an error
MegaCli -PDOffline -PhysDrv [E:S] -aN

Mark the drive as missing
MegaCli -PDMarkMissing -PhysDrv [E:S] -aN

Prepare drive for removal
MegaCli -PDPrpRmv -PhysDrv [E:S] -aN

Change/replace the drive
If you’re using hot spares then the replaced drive should become your new hot spare drive:
MegaCli -PDHSP -Set -PhysDrv [E:S] -aN

In case you’re not working with hot spares, you must re-add the new drive to your RAID virtual drive and start the rebuild:
MegaCli -PdReplaceMissing -PhysDrv [E:S] -ArrayN -rowN -aN
MegaCli -PDRbld -Start -PhysDrv [E:S] -aN

Please note: For a complete reference either call MegaCli -h or refer to the manual at: http://www.lsi.com/files/docs/techdocs/storage_stand_prod/sas/mr_sas_sw_ug.pdf

Friday, December 5, 2014

HP SPP (Service Pack for Proliant) Firmware Upgrade

How to Upgrade the Firmware of a Proliant Server with an ISO image of the Latest HP SPP Version 2014.09.0.


NOTE: Before starting with the HP SPP Firmware Update, it is recommended to have the iLO Firmware up to date!!!

The latest HP SPP Version can be found under: HP Service Pack for Proliant (You need to be registered to be able to download the ISO, and also have a valid Warranty linked to your profile)

First of all, here are some Release Notes of the HP SPP 2014.09.0 and 2014.06.0 Versions:

Release Summary (Version 2014.09.0):

 Important Notes:
  This release no longer supports Red Hat Enterprise Linux 5
  This release no longer supports ProLiant G5 and earlier platforms

 Release Summary:
  Added new support for the following HP ProLiant servers:
   HP BL460c Gen9
   HP DL380 Gen9
   HP DL360 Gen9
   HP ML350 Gen9
   HP DL180 Gen9
   HP DL160 Gen9
   HP XL230a Gen9
  Added support for new HP ProLiant options
  Includes VMware driver support
  Provides operating system support for Red Hat Enterprise Linux 7 and vSphere 5.5 U2
  Contains HP Smart Update Manager v7.1.0

Release Summary (Version 2014.06.0):

 The SPP was updated to address SSL/TLS MITM Vulnerability CVE-2014-0224 http://www.openssl.org/news/secadv_20140605.txt.
 This SPP is intended for use with HP OneView 1.10 and other supported Solutions from HP.

 Added support for:
  HP FlexFabric 20Gb 2-port 630FLB Adapter
  HP FlexFabric 20Gb 2-port 630M Adapter

 Contains:
  HP BladeSystem c-Class Virtual Connect Firmware, Ethernet plus 4/8Gb 20-port and 8Gb 24-port FC Edition Component v4.20(b)
  HP Smart Update Manager v6.4.1

Important Notes:
  This is the last SPP release to support G5 generation and earlier servers.
  This is the final SPP release that will contain support for Red Hat Enterprise Linux 5. Future SPP releases will not contain support for Red Hat Enterprise Linux 5

You can also find some Old Versions Under: HP Service Pack for ProLiant (SPP) - Version Archive

Instructions


1. Attach the HP SPP 2014.06.0 ISO CD in a Virtual Drive on the Server iLO.

2. Reboot the Server

3. Once the Server boots from the ISO, select the Option "Interactive Firmware Update Version 2014.06.0" or "Automatic Firmware Update Version 2014.06.0"
(In this HOWTO I will show the Steps with the Interactive option)


After that, we wait for the ISO to load and boot.



4. Now we will find ourselves with a screen with 3 Options, and we will Select "Firmware Update"

5. After that, the ISO will automatically start to perform an inventory of the whole SPP and also of the node itself.



6. After clicking "Next", we will see a Summary of the Components to be upgraded on the Server.



Check what is to be upgraded and take note of the current and new versions, so you can verify them after the Upgrade.

7. Once we are sure, we click on "Deploy" and the Firmware upgrade will start automatically.



This process can take a while (really a LOOONG while), so be sure that the iLO doesn't hang and please always check the status of the process.

Once the Deployment is finished, we can review the results on the summary screen.



8. Check the "Deployment Status" of all the Components. If everything is in "Success" status, you're good to go and can Reboot the Server.

Don't forget to unmount the ISO on the iLO!!!

Happy Upgrading! ;)

Wednesday, July 9, 2014

Linux Debian: HOWTO Add a Static Route

To add a route on a Linux Server (in our case, a Debian Server), you can run the following command on the command line:

Syntax:
route add -net $NET netmask $MASK gw $GATEWAY
route add -net 192.168.1.0 netmask 255.255.255.0 gw 192.168.1.254

To make this route persistent, we add it as a post-up command in the file /etc/network/interfaces, as follows:
post-up route add -net 192.168.1.0 netmask 255.255.255.0 gw 192.168.1.254

Example:
ximena@xdev:~$ cat /etc/network/interfaces 
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet static
        address 213.252.4.55
        netmask 255.255.255.128
        network 213.252.4.0
        broadcast 213.252.4.127
        gateway 213.252.4.33
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 213.252.1.1
        dns-search ipandmore.de
        post-up route add -net 192.168.1.0 netmask 255.255.255.0 gw 192.168.1.254

Save and close the file and then restart the networking service.

How to verify that the route was added?

sudo route -n
or
ip route show
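
The route command comes from the older net-tools suite; on systems with iproute2 the same route can be added with the ip tool. An equivalent sketch:
ip route add 192.168.1.0/24 via 192.168.1.254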

Friday, July 4, 2014

HOWTO Fix postfix "File too large" error

After a Squeeze -> Wheezy Dist-Upgrade on our Webmail Server, I was getting the following errors whenever the Server would attempt to send an E-Mail larger than 10MB:

postdrop: warning: uid=33: File too large
sendmail: fatal: ximena@ximunix.org(33): message file too big

To solve this issue, the maximum outgoing message size needs to be increased by adding the following line to your /etc/postfix/main.cf:

message_size_limit = 20480000

And restart your postfix:

service postfix restart
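
To verify that the new value is active, you can query it with postconf (part of Postfix itself); it should now print message_size_limit = 20480000:
postconf message_size_limit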

After doing this, the Mail Server should be able to send larger E-mails. :)

Friday, June 27, 2014

Rotate Puppet-Dashboard delayed_job.log

There was a time when the delayed_job.log of the Puppet-Dashboard grew to 8GB, so I had to find a solution, and this is it:

Edit the file: /etc/logrotate.d/puppet-dashboard and add the following lines:

# Puppet-Dashboard logs:
/usr/share/puppet-dashboard/log/delayed_job.log {
  daily
  rotate 7
  missingok
  compress
  delaycompress
  notifempty
  copytruncate
}

Then, the logfiles will look like:

root@server:~# l /usr/share/puppet-dashboard/log/
total 1043256
drwxr-xr-x  2 www-data www-data       4096 Jun 27 05:10 ./
drwxr-xr-x 18 root     root           4096 Jun  4  2013 ../
-rw-r--r--  1 root     root         481078 Jun 27 11:33 delayed_job.log
-rw-r--r--  1 root     root        1811627 Jun 27 05:10 delayed_job.log.1
-rw-r--r--  1 root     root          78097 Jun 26 05:10 delayed_job.log.2.gz
root@server:~# 
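
To test the new rule without waiting for the daily cron run, logrotate can do a verbose dry run with -d, or you can force an immediate rotation with -f:
root@server:~# logrotate -d /etc/logrotate.d/puppet-dashboard
root@server:~# logrotate -f /etc/logrotate.d/puppet-dashboard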

A note for "copytruncate":
Truncate the original log file in place after creating a copy, instead of moving the old log file and optionally creating a new one. It can be used when some program cannot be told to close its logfile and thus might continue writing (appending) to the previous log file forever. Note that there is a very small time slice between copying the file and truncating it, so some logging data might be lost. When this option is used, the create option will have no effect, as the old log file stays in place.

Hope it helps! ;)

UPDATED!!! Puppet Dashboard: All Tasks are "Pending" and all Nodes "Unresponsive"

Well, it seems that the previous Solution was a temporary one. :(
So I kept debugging a little bit and I found this error on the delayed_job log (/usr/share/puppet-dashboard/log/delayed_job.log) :

... 2014-06-13T12:05:40+0200: [Worker(delayed_job.1 host:pmaster01 pid:21516)] Report.create_from_yaml_file failed with ActiveRecord::StatementInvalid: Mysql::Error: Data too long for column 'details' at row 1: INSERT INTO `delayed_job_failure...

I "googled" a little bit and I found out this solution that I hope this time is a permanent one ;)

mysql> use dashboard_production;
mysql> describe delayed_job_failures;
+------------+--------------+------+-----+---------+----------------+
| Field      | Type         | Null | Key | Default | Extra          |
+------------+--------------+------+-----+---------+----------------+
| id         | int(11)      | NO   | PRI | NULL    | auto_increment |
| summary    | varchar(255) | YES  |     | NULL    |                |
| details    | text         | YES  |     | NULL    |                |
| read       | tinyint(1)   | NO   |     | 0       |                |
| created_at | datetime     | YES  |     | NULL    |                |
| updated_at | datetime     | YES  |     | NULL    |                |
| backtrace  | text         | YES  |     | NULL    |                |
+------------+--------------+------+-----+---------+----------------+
7 rows in set (0.00 sec)

mysql> 
mysql> ALTER TABLE delayed_job_failures MODIFY details BLOB;
Query OK, 0 rows affected (0.51 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> 

Also, we will need to change the Type of the message column on the "report_logs" table. Note that MySQL caps VARCHAR columns at 65,535 bytes, so the ALTER below triggers warnings and the column actually ends up as MEDIUMTEXT, as the describe output shows:

mysql> ALTER TABLE report_logs MODIFY message VARCHAR(65536);
Query OK, 46574 rows affected, 2 warnings (0.97 sec)
Records: 46574  Duplicates: 0  Warnings: 2

mysql> describe report_logs;
+-----------+--------------+------+-----+---------+----------------+
| Field     | Type         | Null | Key | Default | Extra          |
+-----------+--------------+------+-----+---------+----------------+
| id        | int(11)      | NO   | PRI | NULL    | auto_increment |
| report_id | int(11)      | NO   | MUL | NULL    |                |
| level     | varchar(255) | YES  |     | NULL    |                |
| message   | mediumtext   | YES  |     | NULL    |                |
| source    | text         | YES  |     | NULL    |                |
| tags      | text         | YES  |     | NULL    |                |
| time      | datetime     | YES  |     | NULL    |                |
| file      | text         | YES  |     | NULL    |                |
| line      | int(11)      | YES  |     | NULL    |                |
+-----------+--------------+------+-----+---------+----------------+
9 rows in set (0.00 sec)

mysql> 

----

Previous Solution (19/05/2014):

I had just updated our Puppet Master Node to the latest Puppet Version (3.6.0) and after the Upgrade everything seemed fine, apart from some Ruby Warnings, until I noticed that ALL the Nodes on the Puppet Dashboard were listed as "Unresponsive" and all Tasks were "Pending".
I have to confess, it took me a while to find a proper solution on Google, until I got it.

If this is your case, you can run the following Steps on the node where you're running the Puppet Dashboard:

1- Stop the Dashboard Workers (see the sketch after these Steps)

2- Execute these Steps:
cd /usr/share/puppet-dashboard/
ls spool/
rm -v spool/*
rake jobs:clear RAILS_ENV=production

3- Start the Dashboard Workers again.
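
On a Debian-packaged installation the Workers are typically controlled through an init script; a minimal sketch, assuming the standard package layout (the script name may differ on your system):
/etc/init.d/puppet-dashboard-workers stop
# ... run the cleanup Steps above ...
/etc/init.d/puppet-dashboard-workers start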

This is a Solution that, at least for now, has worked for me.

I hope it helps!

Monday, May 26, 2014

HOWTO Change MySQL root Password

Method 1: Changing MySQL root user password using "mysqladmin"


To set up the root password for the first time, use the mysqladmin command at the shell prompt as follows:
root@server:~# mysqladmin -u root password NEWPASSWORD

However, if you want to change (or update) a root password, then you need to use the following command:
root@server:~# mysqladmin -u root -p'oldpassword' password newpass

For example, if the old password is abc, you can set the new password to 123456, enter:
root@server:~# mysqladmin -u root -p'abc' password '123456'

How do I verify that the new password is working?

Use the following mysql command:
root@server:~# mysql -u root -p'123456' db-name-here
# OR
root@server:~# mysql -u root -p'123456' -e 'show databases;'

A note about changing MySQL password for other users

To change a normal user password you need to type the following command:
root@server:~# mysqladmin -u user -p'old-password' password new-password

Method 2: Changing MySQL root user password using mysql command


This is another method. You can directly update or change the password using the following method for the user called "test":

Log in to the mysql server by typing the following command at the shell prompt:
root@server:~# mysql -u root -p
mysql> use mysql;

Change password for user "test", enter:
mysql> update user set password=PASSWORD("NEWPASSWORD") where User='test';

Finally, reload the privileges:
mysql> flush privileges;
mysql> quit
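
To double-check, try a quick login as "test" with the new password (hypothetical password shown):
root@server:~# mysql -u test -p'NEWPASSWORD' -e 'SELECT 1;'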

Friday, May 16, 2014

HOWTO: Clean up /boot partition in Ubuntu

Ubuntu normally ships quite a few Kernel Updates, so if you installed your Computer with a /boot Partition of only 200MB, it may fill up and need some cleaning up.

This one-liner that I've found will delete all the kernel packages that are not in use and keep only the one that is running at the moment. It works like a charm!

dpkg --get-selections | grep 'linux-image' | awk '{print $1}' | egrep -v "linux-image-$(uname -r)|linux-image-generic" | while read n; do apt-get -y remove "$n"; done
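
If you'd like to preview what would be removed before running it for real, you can swap apt-get for echo in the same pipeline (a harmless dry run):
dpkg --get-selections | grep 'linux-image' | awk '{print $1}' | egrep -v "linux-image-$(uname -r)|linux-image-generic" | while read n; do echo "$n"; done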

I hope it helps!

Thursday, May 15, 2014

Puppet: Generating password hashes

I came across the need to change the Password of one of our Users on almost all of our Servers, so we decided to build something in Puppet.

For this task, we will use the Type USER and also the Type SCHEDULE, which is part of the Metaparameters. See: http://docs.puppetlabs.com/references/latest/metaparameter.html
With this "schedule" metaparameter, our Puppet Module will change the Password of the User once a day.
To generate a password hash to use within the Puppet Modules Manifests files we are going to use the mkpasswd utility, which is available in the "whois" package (and it works!). In this case we will use Puppet’s "generate" function to call "mkpasswd" and return the generated hash version of the password.
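
You can test the hash generation on the command line first; mkpasswd uses a random salt, so your output will differ from run to run:
mkpasswd -m sha-512 'YOUR_PASSWORD_HERE'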

So, our Manifest will look something like this:

$pass   = 'YOUR_PASSWORD_HERE'

schedule { 'everyday':
        period  => daily,
        range   => "8 - 18",
        repeat  => 1,
}

user { 'backup':
        name    => backup,
        ensure  => present,
        password => generate('/bin/sh', '-c', "mkpasswd -m sha-512 ${pass} | tr -d '\n'"),
        schedule => 'everyday',
}

This is an easy and effective way to make it work.

The Puppet Documentation says that we can use the built-in "sha1" function to generate a hash from a password, but sadly it didn't work for me (maybe I'm too dumb to make it work), so I researched a bit and found the Solution above.

As always, I hope this can help any lost soul around there. :)

Fixing MySQL replication after a faulty query

When you check your MySQL Slave Status and you see something like the following:

mysql> SHOW SLAVE STATUS \G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 1.2.3.4
                Master_User: slave_user
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: mysql-bin.001079
        Read_Master_Log_Pos: 269214454
             Relay_Log_File: slave-relay.000130
              Relay_Log_Pos: 100125935
      Relay_Master_Log_File: mysql-bin.001079
           Slave_IO_Running: Yes
          Slave_SQL_Running: No
            Replicate_Do_DB: mydb
        Replicate_Ignore_DB:
         Replicate_Do_Table:
     Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
                 Last_Errno: 1146
                 Last_Error: Error 'Table 'mydb.taggregate_temp_1212047760' doesn't exist' on query. Default database: 'mydb'. 
Query: 'UPDATE thread AS thread,taggregate_temp_1212047760 AS aggregate
        SET thread.views = thread.views + aggregate.views
        WHERE thread.threadid = aggregate.threadid'
               Skip_Counter: 0
        Exec_Master_Log_Pos: 203015142
            Relay_Log_Space: 166325247
            Until_Condition: None
             Until_Log_File:
              Until_Log_Pos: 0
         Master_SSL_Allowed: No
         Master_SSL_CA_File:
         Master_SSL_CA_Path:
            Master_SSL_Cert:
          Master_SSL_Cipher:
             Master_SSL_Key:
      Seconds_Behind_Master: NULL
1 row in set (0.00 sec)

mysql>

You will need to Repair your Replication. Fixing the problem is easy, but you need to be careful.
The Solution is to tell the slave to skip the invalid query, by running the following:

mysql> SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1; START SLAVE;

And then, we will check if the replication is working correctly again:

mysql> SHOW SLAVE STATUS \G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 1.2.3.4
                Master_User: slave_user
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: mysql-bin.001079
        Read_Master_Log_Pos: 447560366
             Relay_Log_File: slave-relay.000130
              Relay_Log_Pos: 225644062
      Relay_Master_Log_File: mysql-bin.001079
           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
            Replicate_Do_DB: mydb
        Replicate_Ignore_DB:
         Replicate_Do_Table:
     Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
                 Last_Errno: 0
                 Last_Error:
               Skip_Counter: 0
        Exec_Master_Log_Pos: 447560366
            Relay_Log_Space: 225644062
            Until_Condition: None
             Until_Log_File:
              Until_Log_Pos: 0
         Master_SSL_Allowed: No
         Master_SSL_CA_File:
         Master_SSL_CA_Path:
            Master_SSL_Cert:
          Master_SSL_Cipher:
             Master_SSL_Key:
      Seconds_Behind_Master: 0
1 row in set (0.00 sec)

mysql>

Now you will also see that "Seconds Behind Master" is "0", and that "Slave IO Running" and "Slave SQL Running" are both set to "Yes".
Everything should run smoothly now :)
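
By the way, a handy one-liner to keep an eye on the important fields from the shell (just a convenience sketch using standard tools):
mysql -e 'SHOW SLAVE STATUS \G' | egrep 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_Errno'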

Friday, January 17, 2014

HOWTO list the contents of a tar, tar.gz or tar.bz2 file

"tar" is an archiving program designed to store and extract files from an archive file known as a tarfile. However, sometimes you need to list the contents of a tar or tar.gz file on screen before extracting all the files.

1. List the contents of a tar file:
ximena@anneke:~$ tar -tvf file.tar

2. List the contents of a tar.gz file:
ximena@anneke:~$ tar -ztvf file.tar.gz

3. List the contents of a tar.bz2 file
ximena@anneke:~$ tar -jtvf file.tar.bz2

Where:
t: List the contents of an archive.
v: Verbosely list the files processed.
z: Filter the archive through gzip, used to read compressed .tar.gz files.
j: Filter the archive through bzip2, used to read .tar.bz2 files.
f filename: Use the archive file called filename.
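
As a side note, modern GNU tar can detect the compression automatically when reading an archive, so the plain -tvf form usually works for all three:
ximena@anneke:~$ tar -tvf file.tar.gz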

Wednesday, January 8, 2014

Enable Slow Query Log in MySQL

How to enable slow query log in MySQL without restarting mysqld

In MySQL 5.1.12 and later, this can be done without restarting mysql as follows:

root@anneke:~# touch /var/log/mysql/mysql-slow.log
root@anneke:~# chown mysql:mysql /var/log/mysql/mysql-slow.log
root@anneke:~# mysql -e 'SET GLOBAL slow_query_log=1;'
root@anneke:~# mysql -e 'SET GLOBAL slow_query_log_file="/var/log/mysql/mysql-slow.log";'
root@anneke:~# mysql -e 'SET GLOBAL long_query_time=2;'
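
To confirm that the settings took effect, you can query the server variables (standard MySQL syntax):
root@anneke:~# mysql -e "SHOW VARIABLES LIKE 'slow_query%';"
root@anneke:~# mysql -e "SHOW VARIABLES LIKE 'long_query_time';"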


If you want slow query logging to persist across restarts, you will need to add these 3 lines to /etc/mysql/my.cnf under the [mysqld] section:
slow_query_log=1
slow_query_log_file=/var/log/mysql/mysql-slow.log
long_query_time=2


To test if it is working correctly, simply run a SELECT SLEEP(x) query where "x" is longer than the long_query_time you have set - the query should be logged to the file you specified for slow_query_log_file (as long as it exists and the mysql user can access it).

root@anneke:~# mysql -e 'SELECT SLEEP(5);'

Then you can look at the logfile and check if your "slow" query is there:
less /var/log/mysql/mysql-slow.log
or
tail -f /var/log/mysql/mysql-slow.log


Here is a summary of all of the different ways to turn on slow query logging in various versions of MySQL:
  • Before 5.1.6, start mysqld with the --log-slow-queries[=file_name] option.
  • In versions after MySQL 5.1.6, you can log to a file or a table, or both. Start mysqld with the --log-slow-queries[=file_name] option and optionally use --log-output to specify the log destination.
  • After MySQL 5.1.12, use --slow_query_log[={0|1}] - 1 is on and the default slow query log file name is used.
  • After MySQL 5.1.29, use --slow_query_log[={0|1}] - the --log-slow-queries option is deprecated.
  • When using slow_query_log=1 (either in my.cnf or starting with mysqld --slow_query_log=1) use slow_query_log_file=/your/log/file or just leave it as the default ($hostname-slow.log in the mysql data folder).

You can find more information about slow query logging at the official MySQL Reference: http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html

Friday, January 3, 2014

HOWTO NFS Mount Using Exportfs

Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984, allowing a user on a client computer to access files over a network much like local storage is accessed.

Here are some tips for a daily use of NFS:

1. To export a directory to a remote machine, do the following:
exportfs REMOTEIP:PATH

where:
    REMOTEIP – IP of the remote server to which you want to export.
    PATH – Path of directory that you want to export.

For example:
root@anneke:~# exportfs 192.168.3.0/255.255.255.0:/nfs_export/test

2. To mount the remote file system on the local server, do the following:
mount REMOTEIP:PATH_ORIGIN PATH_DEST

For example:
root@anneke:~# mount 192.168.3.6:/nfs_export/test /mnt/test

where:
    REMOTEIP – IP of the remote server which exported the file system
    PATH_ORIGIN – Path of the directory exported by the remote server.
    PATH_DEST - Path on the local server where you want to mount the export

3. Unmount Remote File System
You can unmount the remote file system mounted on the local server using the normal umount PATH command. For more options, refer to the umount man page.

For example:
root@anneke:~# umount /mnt/test

4. Unexport the File System
You can check the exported file system as shown below.

root@anneke:~# exportfs
/nfs_export/test
                192.168.3.0/255.255.255.0

To unexport the file system, use the -u option as shown below:
exportfs -u REMOTEIP:PATH

For example:
root@anneke:~# exportfs -u 192.168.3.0/255.255.255.0:/nfs_export/test

After unexporting, check to make sure it is not available for NFS mount as shown below.

root@anneke:~# exportfs

5. Make NFS Export Permanent Across System Reboot
The export can be made permanent by adding the corresponding entry to the /etc/exports file.

root@anneke:~# cat /etc/exports
/nfs_export/test       192.168.3.0/255.255.255.0
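
After editing /etc/exports, you can re-export all entries so the changes take effect (-r re-exports, -a applies to all):
root@anneke:~# exportfs -ra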


6. Make the Mount Permanent Across Reboot
The mount can be made permanent by adding the corresponding entry to the /etc/fstab file.

root@anneke:~# cat /etc/fstab
192.168.3.6:/nfs_export/test    /mnt/test   nfs    noauto,vers=3,nolock,noatime,port=2049,proto=udp   0 0


(The options in fstab depend on your system and what you need them for! You can always use the "defaults" options :) )