Wednesday, August 17, 2016

UPDATED!!! HOWTO Cleanup Puppet Reports and DB

If the database for Puppet Dashboard is using several GB and growing every day, here is a way to get some of that space back.

There are two rake tasks you should run every day as part of regular maintenance for Puppet Dashboard.

cd /usr/share/puppet-dashboard
env RAILS_ENV=production rake reports:prune upto=5 unit=day
env RAILS_ENV=production rake reports:prune:orphaned


You can change the RAILS_ENV and the number of days (day), weeks (wk), months (mon), etc. to match your system and its needs.

1. Stop incoming reports:

cd /path/to/puppet-dashboard
env RAILS_ENV=production script/delayed_job -p dashboard -m stop


2. Start deleting reports in small batches

Keep working your way in towards the length of time you want to keep reports for. The reason for the small batches is that InnoDB tables perform poorly when deleting more than 10k rows at a time. If you try to delete a few hundred thousand rows at once, the operation will time out and you'll have to break it up into smaller deletes anyway; the Ruby rake process will also probably use all your RAM and get killed by the kernel before it finishes. A progression like the one below should work for most people, but if you have many months of data you may want to start with a month or two of your earliest records. In our case, we are keeping just 2 weeks of reports (14 days).

env RAILS_ENV=production rake reports:prune upto=6 unit=mon
env RAILS_ENV=production rake reports:prune upto=4 unit=mon
env RAILS_ENV=production rake reports:prune upto=2 unit=mon
env RAILS_ENV=production rake reports:prune upto=3 unit=wk
env RAILS_ENV=production rake reports:prune upto=1 unit=wk
env RAILS_ENV=production rake reports:prune upto=5 unit=day


3. Determine the best method to reclaim space from MySQL

There are two methods to reclaim space, depending on how MySQL was configured. Run this command to check whether "innodb_file_per_table" is enabled; if it is, the value will show "ON".
NOTE: I recommend enabling innodb_file_per_table on your MySQL server for cases like this one.

mysqladmin variables -u root -p | grep innodb_file_per_table

You can also list the database directory to see whether there are large data files. The table most likely to be large is resource_statuses.ibd.

ls -lah /var/lib/mysql/dashboard_production
...
-rw-rw---- 1 mysql mysql      8.9K Jan 08 12:50 resource_statuses.frm
-rw-rw---- 1 mysql mysql       15G Jan 08 12:50 resource_statuses.ibd
...
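Table sizes can also be checked from inside MySQL instead of the filesystem. A sketch querying information_schema, assuming the database is named dashboard_production as in the listings above:

```shell
# List tables in dashboard_production ordered by on-disk size (MB)
mysql -u root -p -e "
  SELECT table_name,
         ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
  FROM information_schema.tables
  WHERE table_schema = 'dashboard_production'
  ORDER BY (data_length + index_length) DESC;"
```

This works regardless of whether innodb_file_per_table is enabled, since it reads the table metadata rather than the data files.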


4. Reclaiming space the easy way

If MySQL was configured with innodb_file_per_table and the listing of your Dashboard DB shows that your data is in large per-table files, do the following:

mysql -u root -p
use dashboard_production;
OPTIMIZE TABLE resource_statuses;

This will create a new table based on the current data and copy it into place. If you do a listing while this is in progress you should see something like this:

-rw-rw---- 1 mysql mysql       8.9K Jan  08 12:50 resource_statuses.frm
-rw-rw---- 1 mysql mysql        15G Jan  08 12:50 resource_statuses.ibd
-rw-rw---- 1 mysql mysql       8.9K Jan  08 12:50 #sql-379_415.frm
-rw-rw---- 1 mysql mysql       238M Jan  08 12:51 #sql-379_415.ibd


And when it finishes, it copies the temporary file into place. In this case we went from 15GB to 708MB.

-rw-rw---- 1 mysql mysql 8.9K Jan 08 13:01 resource_statuses.frm
-rw-rw---- 1 mysql mysql 708M Jan 08 13:03 resource_statuses.ibd



The optimization of the database can also be done via rake:

root@pmaster01:~# cd /usr/share/puppet-dashboard
root@pmaster01:/usr/share/puppet-dashboard# env RAILS_ENV=production rake db:raw:optimize
Optimizing tables, this may take a while:
* delayed_job_failures
* delayed_jobs
* metrics
* node_class_memberships
* node_classes
* node_group_class_memberships
* node_group_edges
* node_group_memberships
* node_groups
* nodes
* old_reports
* parameters
* report_logs
* reports
* resource_events
* resource_statuses
* schema_migrations
* timeline_events
root@pmaster01:/usr/share/puppet-dashboard#

5. Truncate Tables:

When the method above does not work, I found that truncating the "resource_statuses" table also does the job:

First make a dump of the table:
root@pmaster01:~# mysqldump dashboard_production resource_statuses > resource_statuses_20160817-1418.sql

Stop the Dashboard Workers:
root@pmaster01:~# /etc/scripts/dashboard_workers.sh stop
root@pmaster01:~# /etc/scripts/dashboard_workers.sh status
STATUS is NOT OK: 0 workers are running
root@pmaster01:~#

Now let's go for the truncate:

root@pmaster01:~# mysql dashboard_production
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2111
Server version: 5.5.37-0+wheezy1 (Debian)
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show tables;
+--------------------------------+
| Tables_in_dashboard_production |
+--------------------------------+
| delayed_job_failures           |
| delayed_jobs                   |
| metrics                        |
| node_class_memberships         |
| node_classes                   |
| node_group_class_memberships   |
| node_group_edges               |
| node_group_memberships         |
| node_groups                    |
| nodes                          |
| old_reports                    |
| parameters                     |
| report_logs                    |
| reports                        |
| resource_events                |
| resource_statuses              |
| schema_migrations              |
| timeline_events                |
+--------------------------------+
18 rows in set (0.00 sec)
mysql> truncate table resource_statuses;
ERROR 1701 (42000): Cannot truncate a table referenced in a foreign key constraint (`dashboard_production`.`resource_events`, CONSTRAINT `fk_resource_events_resource_status_id` FOREIGN KEY (`resource_status_id`) REFERENCES `dashboard_production`.`resource_statuses` (`id`))

An error like the one above is common for tables with foreign keys; that's why we run the truncate in the following way:

mysql> SET FOREIGN_KEY_CHECKS = 0;
Query OK, 0 rows affected (0.00 sec)
mysql> truncate table resource_statuses;
Query OK, 0 rows affected (0.46 sec)
mysql> SET FOREIGN_KEY_CHECKS = 1;
Query OK, 0 rows affected (0.00 sec)
mysql>
Now start the Dashboard workers again:

root@pmaster01:~# /etc/scripts/dashboard_workers.sh start
root@pmaster01:~# /etc/scripts/dashboard_workers.sh status
Check the new size of the table:

root@pmaster01:~# ls -lh /var/lib/mysql/dashboard_production/resource_statuses*
-rw-rw---- 1 mysql mysql 8.9K Oct  7 12:21 resource_statuses.frm
-rw-rw---- 1 mysql mysql 9.0M Oct  7 12:48 resource_statuses.ibd
The table went from 1.7 GB down to 9 MB.
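Should the truncated data ever be needed again, the dump taken at the start of this step can be restored:

```shell
# Restore the rows from the dump taken before the truncate
mysql dashboard_production < resource_statuses_20160817-1418.sql
```

Note that reimporting a multi-GB dump can take a long time, so keep the dump around only as a safety net.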

6. Reclaiming space the hard way


If your system was not configured with innodb_file_per_table, or all the current data resides in a single large ibdata file, the only way to reclaim space is to wipe the entire installation and reimport all the data.
The overall procedure is: first enable innodb_file_per_table, dump all the databases, stop MySQL, delete /var/lib/mysql, run mysql_install_db to recreate /var/lib/mysql, start MySQL, and finally reimport the data. The optimize steps are not needed, because the import rebuilds the tables.
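The procedure above can be sketched as follows; the dump path is illustrative and the service commands assume a Debian-style init setup:

```shell
# 1. Enable innodb_file_per_table under the [mysqld] section of my.cnf
#    (edit the file by hand before proceeding)

# 2. Dump all databases while MySQL is still running
mysqldump -u root -p --all-databases > /root/all-databases.sql

# 3. Stop MySQL and re-initialize the data directory
service mysql stop
rm -rf /var/lib/mysql
mysql_install_db --user=mysql
service mysql start

# 4. Reimport the data
mysql -u root -p < /root/all-databases.sql
```

Make sure the dump completed successfully before deleting /var/lib/mysql, since that step is irreversible.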

7. Finally, Restart the delayed_job:

cd /path/to/puppet-dashboard
env RAILS_ENV=production script/delayed_job -p dashboard -n 2 -m start


8. Daily Reports Cleanup and DB Maintenance:

For a daily reports cleanup you can create a simple Bash script that finds the reports under /var/lib/puppet/reports by age (mtime +14 in our case), removes them, and then cleans up the DB with (upto=2 unit=wk), and run it from your crontab.
An example script:

#!/bin/bash
# Delete Puppet report files older than 14 days
find /var/lib/puppet/reports -type f -mtime +14 -delete

cd /usr/share/puppet-dashboard

env RAILS_ENV=production rake reports:prune upto=2 unit=wk
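To schedule this, the script can be installed under cron; the file path, script name, and schedule below are assumptions, adjust them to taste:

```shell
# /etc/cron.d/puppet-dashboard-cleanup (hypothetical path)
# Run the cleanup script every night at 03:30 as root
30 3 * * * root /usr/local/bin/puppet_reports_cleanup.sh
```

Running it at a quiet hour avoids competing with the delayed_job workers for database time.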


Friday, June 3, 2016

UPDATED!!! HOWTO hpacucli/hpssacli

UPDATE 03.06.2016: Changed old "hpacucli" for the new "hpssacli"

Abbreviations: 
chassisname = ch
controller = ctrl 
logicaldrive = ld
physicaldrive = pd 
drivewritecache = dwc

There are two ways to execute the command:

1. When you run hpacucli/hpssacli with no arguments, it displays a "=>" prompt, as shown below, where you can enter commands interactively:

root@ximunix:~# hpssacli
HP Smart Storage Administrator CLI 2.10.14.0
Detecting Controllers...Done.
Type "help" for a list of supported commands.
Type "exit" to close the console.

=> ctrl all show

Smart Array P400 in Slot 1                (sn: XXXXXXXXXXXXXX)

=> quit

2. Or, if you don't want to use the hpacucli/hpssacli prompt, you can enter the command directly at the Linux shell:

root@ximunix:~# hpssacli ctrl all show

Smart Array P400 in Slot 1                (sn: XXXXXXXXXXXXXX)

Controller Commands

## Display configuration of controller
hpssacli ctrl all show config

## Display detailed configuration of controller
hpssacli ctrl all show config detail

## Display status of Controller
hpssacli ctrl all show status

## Rescan for New Devices
hpssacli rescan

Physical Drive Commands

## Display detail information of all physical drives
hpssacli ctrl slot=0 pd all show

## Display detail information of a single physical drive
hpssacli ctrl slot=0 pd 2:3 show detail

## Display status of all physical drives
hpssacli ctrl slot=0 pd all show status

## Display status of single physical drive
hpssacli ctrl slot=0 pd 2:3 show status

## To Erase the physical drive (WARN!)
hpssacli ctrl slot=0 pd 2:3 modify erase

## To enable the LED of a physical drive
hpssacli ctrl slot=0 pd 2:3 modify led=on

## To disable the LED of a physical drive
hpssacli ctrl slot=0 pd 2:3 modify led=off

Logical Drive Commands

## Display detail information of all logical drives
hpssacli ctrl slot=0 ld all show

## Display detail information of single logical drive
hpssacli ctrl slot=0 ld 4 show

## Display status of all logical drives
hpssacli ctrl slot=0 ld all show status

## Display status of single logical drive
hpssacli ctrl slot=0 ld 4 show status

## Re-enabling failed drive
hpssacli ctrl slot=0 ld 4 modify reenable forced

## Create logical drive with RAID 0 using one drive
hpssacli ctrl slot=0 create type=ld drives=1:12 raid=0

## Create LogicalDrive with RAID 1 using two drives
hpssacli ctrl slot=0 create type=ld drives=1:13,1:14 size=300 raid=1

## Create LogicalDrive with RAID 5 using five drives
hpssacli ctrl slot=0 create type=ld drives=1:13,1:14,1:15,1:16,1:17 raid=5

## Delete a specific logical drive
hpssacli ctrl slot=0 ld 4 delete

## Expanding a logical drive by adding two more drives
hpssacli ctrl slot=0 ld 4 add drives=2I:1:6,2I:1:7

## Extending the logical drive
hpssacli ctrl slot=0 ld 4 modify size=500 forced

## Add two spare disks
hpssacli ctrl slot=0 array all add spares=2I:1:6,2I:1:7

Caching

## Enable Cache
If you have a battery pack installed but your Drive Write Cache still shows as "Disabled", you can enable it using the command:

root@ximunix:~# hpssacli ctrl slot=0 modify dwc=enable

Warning: Without the proper safety precautions, use of write cache on physical 
         drives could cause data loss in the event of power failure.  To ensure
         data is properly protected, use redundant power supplies and
         Uninterruptible Power Supplies. Also, if you have multiple storage
         enclosures, all data should be mirrored across them. Use of this
         feature is not recommended unless these precautions are followed.
         Continue? (y/n) y

The warning is self-explanatory, I guess. The disks' own cache is not protected by the controller's battery. It's up to you, but I wouldn't enable this feature unless your power supply is protected.

## Disable Cache
hpssacli ctrl slot=0 modify dwc=disable

## Modify Accelerator Ratio (read/write):
hpssacli ctrl slot=0 modify cacheratio=25/75

## Enable Array Acceleration for one of your logical drives use:
hpssacli ctrl slot=0 ld 4 modify aa=enable

## Enable Array Acceleration for all of your logical drives use:
hpssacli ctrl slot=0 ld all modify arrayaccelerator=enable

Generate Diagnostic Report

hpssacli ctrl all diag file=/tmp/ADUreport.zip ris=on xml=on zip=on

Hope you find it useful! ;)

UPDATED!!! HP-Tools/Firmware for ProLiant/Debian based Servers

Update 03.06.2016: The HP Links were changed to the new URLs.

To all the SysAdmins who end up in the situation where you have to support HP ProLiant servers under Debian GNU/Linux and can't find official support from HP: here are some tips & tricks.

WARNING: This blog, of course, does not imply official support from HP. Information on official HP support offerings for Debian can be found at http://hp.com/go/debian

Debian + bnx2 Firmware

First of all, as many of you know, to install Debian on HP ProLiant servers we need the bnx2 firmware.
If you are a lazy SysAdmin like me and don't want to build the CD with the firmware yourself, there is a non-free repo with Debian images that include the bnx2 firmware:
http://cdimage.debian.org/cdimage/unofficial/non-free/cd-including-firmware/

The cool thing about this is that it works!  ;)

Now, after you have Debian installed and running on your cool HP ProLiant server, here are some tools you may want on your HP box.

HP-Tools

 Sources


wget http://downloads.linux.hpe.com/SDR/add_repo.sh
chmod +x add_repo.sh
./add_repo.sh mcp

For example:

root@test01:~# ./add_repo.sh mcp
note : You must read and accept the License Agreement to continue.
Press enter to display it ...
 END USER LICENSE AGREEMENT
 PLEASE READ CAREFULLY: THE USE OF THE SOFTWARE IS SUBJECT TO THE TERMS AND CONDITIONS THAT FOLLOW (AGREEMENT), UNLESS THE SOFTWARE IS SUBJECT TO A SEPARATE LICENSE AGREEMENT BETWEEN YOU AND HP OR ITS SUPPLIERS.  BY DOWNLOADING, INSTAL
LING, COPYING, ACCESSING, OR USING THE SOFTWARE, OR BY CHOOSING THE I ACCEPT OPTION LOCATED ON OR ADJACENT TO THE SCREEN WHERE THIS AGREEMENT MAY BE DISPLAYED, YOU AGREE TO THE TERMS OF THIS AGREEMENT, ANY APPLICABLE WARRANTY STATEMENT
 AND THE TERMS AND CONDITIONS CONTAINED IN THE ANCILLARY SOFTWARE  (as defined below). IF YOU ARE ACCEPTING THESE TERMS ON BEHALF OF ANOTHER PERSON OR A COMPANY OR OTHER LEGAL ENTITY, YOU REPRESENT AND WARRANT THAT YOU HAVE FULL AUTHOR
ITY TO BIND THAT PERSON, COMPANY, OR LEGAL ENTITY TO THESE TERMS.  IF YOU DO NOT AGREE TO THESE TERMS, DO NOT DOWNLOAD, INSTALL, COPY, ACCESS, OR USE THE SOFTWARE, AND PROMPTLY RETURN THE SOFTWARE WITH PROOF OF PURCHASE TO THE PARTY FROM
 WHOM YOU ACQUIRED IT AND OBTAIN A REFUND OF THE AMOUNT YOU PAID, IF ANY.  IF YOU DOWNLOADED THE SOFTWARE, CONTACT THE PARTY FROM WHOM YOU ACQUIRED IT.
 QUANTITY OF DEVICES:
 1.   GENERAL TERMS
...
...
...
Do you accept? (yes/no) yes
info : Repo added to /etc/apt/sources.list.d/HP-mcp.list.
root@test01:~# less /etc/apt/sources.list.d/HP-mcp.list

# auto-generated by
 
# By including and using this configuration,
# you agree to the terms and conditions
# of the HP Software License Agreement at
 
# HP Software Delivery Repository for mcp
deb http://downloads.linux.hpe.com/SDR/repo/mcp jessie/current non-free

root@test01:~#


And then we download the GPG-KEY:

# corresponding to http://downloads.linux.hpe.com/faq.html:
wget http://downloads.linux.hpe.com/SDR/repo/mcp/GPG-KEY-mcp -O - | apt-key add -
apt-get update

 

OR

Add this line to your APT sources list:

# HP-TOOLS
deb http://downloads.linux.hpe.com/SDR/downloads/MCP/debian jessie/current non-free


Add HP-apt Key:
wget http://downloads.linux.hpe.com/SDR/downloads/MCP/GPG-KEY-mcp -O - | apt-key add -

Then run apt-get update and install any of the packages you might need.

Installing Individual Packages

 

HP System Health Application and Command line Utilities (hp-health)

The HP System Health Application and Command Line Utilities (hp-health) is a collection of applications and tools that enables monitoring of fans, power supplies, temperature sensors, and other management events. It also provides a collection of command-line utilities: the ProLiant boot configuration utility (hpbootcfg), the ProLiant Management Command Line Interface Utility (hpasmcli), the ProLiant Integrated Management Log (IML) Utility (hplog), and the UID (blue) Light Utility (hpuid). To install the hp-health package, run:

apt-get install hp-health

Here we can have some interesting usage for hpasmcli:

root@anneke:~$ hpasmcli
HP management CLI for Linux (v1.0)
Copyright 2004 Hewlett-Packard Development Group, L.P.

--------------------------------------------------------------------------
NOTE: Some hpasmcli commands may not be supported on all Proliant servers.
      Type 'help' to get a list of all top level commands.
--------------------------------------------------------------------------
hpasmcli> help
CLEAR  DISABLE  ENABLE  EXIT  HELP  NOTE  QUIT  REPAIR  SET  SHOW
hpasmcli>


As can be seen in the example above, several main tasks can be performed; to get the usage of any command, simply use HELP followed by the command.

hpasmcli> help show
USAGE: SHOW [ ASR | BOOT | DIMM | F1 | FANS | HT | IML | IPL | NAME | PORTMAP | POWERSUPPLY | PXE | SERIAL | SERVER | TEMP | UID | WOL ]
hpasmcli>
hpasmcli> HELP SHOW BOOT
USAGE: SHOW BOOT: Shows boot devices.
hpasmcli>


In scripting mode, hpasmcli can be used directly from the shell prompt with the -s option and the command between quotation marks. This is an easy way to run scripts or health checks on our servers. For example:

root@anneke:~$ hpasmcli -s "show server"

System        : ProLiant DL380 G5
Serial No.    : xxxxxx
ROM version   : P56 05/02/2011
iLo present   : Yes
Embedded NICs : 2
        NIC1 MAC: x:x:x:x:x:x
        NIC2 MAC: x:x:x:x:x:x

Processor: 0
        Name         : Intel Xeon
        Stepping     : 6
        Speed        : 2500 MHz
        Bus          : 1333 MHz
        Core         : 4
        Thread       : 4
        Socket       : 1
        Level1 Cache : 128 KBytes
        Level2 Cache : 12288 KBytes
        Status       : Ok

Processor: 1
        Name         : Intel Xeon
        Stepping     : 6
        Speed        : 2500 MHz
        Bus          : 1333 MHz
        Core         : 4
        Thread       : 4
        Socket       : 2
        Level1 Cache : 128 KBytes
        Level2 Cache : 12288 KBytes
        Status       : Ok

Processor total  : 2

Memory installed : 32768 MBytes
ECC supported    : Yes


or:

root@anneke:~$ hpasmcli -s "show fan; show temp"

Fan  Location        Present Speed  of max  Redundant  Partner  Hot-pluggable
---  --------        ------- -----  ------  ---------  -------  -------------
#1   I/O_ZONE        Yes     NORMAL  45%     Yes        0        Yes          
#2   I/O_ZONE        Yes     NORMAL  45%     Yes        0        Yes          
#3   PROCESSOR_ZONE  Yes     NORMAL  41%     Yes        0        Yes          
#4   PROCESSOR_ZONE  Yes     NORMAL  36%     Yes        0        Yes          
#5   PROCESSOR_ZONE  Yes     NORMAL  36%     Yes        0        Yes          
#6   PROCESSOR_ZONE  Yes     NORMAL  36%     Yes        0        Yes          


Sensor   Location              Temp       Threshold
------   --------              ----       ---------
#1        I/O_ZONE             48C/118F   70C/158F
#2        AMBIENT              23C/73F    39C/102F
#3        CPU#1                36C/96F    127C/260F
#4        CPU#1                36C/96F    127C/260F
#5        SYSTEM_BD            50C/122F   77C/170F
#6        CPU#2                36C/96F    127C/260F
#7        CPU#2                36C/96F    127C/260F


or:

root@anneke:~$ hpasmcli -s "show dimm" | egrep "Module|Status"
Module #:                     1
Status:                       Ok
Module #:                     2
Status:                       Ok
Module #:                     3
Status:                       Ok
Module #:                     4
Status:                       Ok
Module #:                     5
Status:                       Ok
Module #:                     6
Status:                       Ok
Module #:                     7
Status:                       Ok
Module #:                     8
Status:                       Ok



and many more... you can play a bit with them! ;)

HP RILOE II/iLO online configuration utility (hponcfg)

Hponcfg is a command line utility that can be used to configure iLO/RILOE II from within the operating system without requiring a reboot of the server. To install the hponcfg package, run:

apt-get install hponcfg

Say you want to check the firmware version of your iLO:
root@anneke:~$  hponcfg | grep Firmware
Firmware Revision = 1.22 Device type = iLO 3 Driver name = hpilo
root@anneke:~$


Insight Management SNMP Agents for HP ProLiant Systems (hp-snmp-agents)

The HP SNMP Agents (hp-snmp-agents) is a collection of SNMP protocol based agents and tools which enables monitoring of fans, power supplies, temperature sensors and other management events via SNMP. To install the hp-snmp-agents package, run:

apt-get install hp-snmp-agents

To configure the hp-snmp-agents package, run:

/sbin/hpsnmpconfig

Finally, restart the hp-snmp-agents service:

/etc/init.d/hp-snmp-agents restart

Note: In some configurations, the following message will be displayed when hp-snmp-agents starts:

    FATAL: Module sg not found.

This message can be safely ignored... (so it says HP :P )

HP System Management Homepage (hpsmh)

The HP System Management Homepage (hpsmh) provides a consolidated view for single server management highlighting tightly integrated management functionalities including performance, fault, security, diagnostic, configuration, and software change management. To install the hpsmh package, run:

apt-get install hpsmh

Note: You may see the following message the first time you attempt to install hpsmh:

I/O warning : failed to load external entity "/opt/hp/hpsmh/conf/smhpd.xml"


If you see this message, you will need to restart the hpsmh service before you can make use of it. To restart the hpsmh service, run:

/etc/init.d/hpsmhd restart

HP System Management Homepage Templates (hp-smh-templates)

The HP System Management Homepage Templates for Linux (hp-smh-templates) contains the System Management Homepage Templates for Server, Management processor, NIC and Storage subsystems. The templates are a collection of html, javascript and php files that act as a GUI to display the SNMP data provided by each subsystem's agents. This package is dependent on the hp-snmp-agents package and also on the hpsmh package to serve the pages to the browser. To install the hp-smh-templates package, run:

apt-get install hp-smh-templates

HP Command Line Array Configuration Utility (hpacucli/hpssacli)

The Array Configuration Utility CLI (hpacucli) is a command line based disk configuration program that allows you to configure Smart Array Controllers and RAID Array Controllers.

For this special friend, there is another repo with more up-to-date software!


Add this line to your APT sources list (if the package is not already available from the previous repo; see the beginning of the post):

deb http://downloads.linux.hpe.com/SDR/downloads/MCP jessie/current non-free


Packages are now signed, please run the following command after adding the repository to sources.list:
wget -O - http://downloads.linux.hpe.com/SDR/downloads/MCP/debian/dists/jessie/current/Release.gpg | sudo apt-key add -

Now Run:
apt-get update
apt-get install hpssacli


And for a quick guide to this tool's commands, you can take a look at this guide ;)
http://www.datadisk.co.uk/html_docs/redhat/hpacucli.htm

HP Array Configuration Utility (cpqacuxe)

The HP Array Configuration Utility (cpqacuxe) is a web-based disk configuration utility for HP array controllers. To install cpqacuxe package, run:

apt-get install cpqacuxe

To enable the use of the web-based Array Configuration Utility, you must first manually start the cpqacuxe service from the command line:

/usr/sbin/cpqacuxe


Hope this helps the lost souls who need some of these things!

Friday, February 26, 2016

How to prevent updating of a specific package on Debian

If you want to run a dist-upgrade but avoid upgrading a specific package, here are some options:

Using dpkg

Displaying the status of your packages

dpkg --get-selections

Displaying the status of a single package

dpkg --get-selections | grep "package"

Put a package on hold

echo "package hold" | sudo dpkg --set-selections

Remove the hold

echo "package install" | sudo dpkg --set-selections


Using apt


Hold a package using:

apt-mark hold package_name

Remove the hold with:

apt-mark unhold package_name
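To verify which packages are currently held back, both tools offer a quick check:

```shell
# List all packages currently on hold (apt way)
apt-mark showhold

# Equivalent check via dpkg
dpkg --get-selections | grep hold
```

Either listing should show the package before you run the dist-upgrade, confirming it will be skipped.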

HOWTO create Partition larger than 2TB with parted

1- First, we check the current disk size

fdisk -l /dev/sdb
  
Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table

2- Now, we will use parted's mklabel command to set the disk label to GPT (GUID Partition Table):

parted /dev/sdb
GNU Parted 2.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) print
Error: /dev/sdb: unrecognised disk label

(parted) mklabel gpt

(parted) print
Model: Unknown (unknown)
Disk /dev/sdb: 3000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

...

Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted)

3- Next, set the default unit to TB, enter:

(parted) unit TB

4- Create a 3TB partition size:

(parted) mkpart primary 0% 100%
#OR
(parted) mkpart primary 0.00TB 3.00TB

5- To print the current partitions, enter:

(parted) print

Model: ATA ST33000651AS (scsi)
Disk /dev/sdb: 3.00TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
 1      0.00TB  3.00TB  3.00TB  ext4         primary

Quit and save the changes, enter:

(parted) quit

Information: You may need to update /etc/fstab.
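The interactive steps above can also be scripted non-interactively with parted's -s flag; a sketch, assuming /dev/sdb is the target disk:

```shell
# WARNING: destroys the existing partition table on /dev/sdb
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary ext4 0% 100%
parted -s /dev/sdb print
```

Using percentage boundaries lets parted pick properly aligned start and end sectors for you.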

6- Format the filesystem:
# mkfs.ext4 /dev/sdb1

7- Edit fstab and mount the Filesystem:
vi /etc/fstab
# add the following line:
/dev/sdb1       /data   ext4    defaults    0   2

mkdir /data
mount /data
df -h

Filesystem             Size   Used  Avail Use% Mounted on
/dev/sdc1               16G   819M    14G   6% /
tmpfs                  1.6G      0   1.6G   0% /lib/init/rw
udev                   1.6G   123k   1.6G   1% /dev
tmpfs                  1.6G      0   1.6G   0% /dev/shm
/dev/sdb1              3.0T   211M   2.9T   1% /data

And that will do it :)

Tuesday, February 16, 2016

HP ProLiant MicroServer Gen8 with Debian Jessie (Setup, Installation and Configuration)

So... I've got a HP ProLiant MicroServer Gen8 and I wanted to share with you the setup and my experience with it.


Server Hardware and Characteristics:


Product Name: ProLiant MicroServer Gen8
Product ID: 819185-421
iLO 4: Firmware Version 2.30

CPU: Intel(R) Celeron(R) CPU G1610T @ 2.30GHz

RAM: 6 GB (Thanks to one of my colleagues, who gave me extra 2GB RAM. @Rainer!)



Smart Array Controller:

The Server comes with a HP Dynamic Smart Array b120i RAID Controller which doesn't have support for Linux Distros, so I've bought a: HP Smart Array P410 Controller which is working really nice and smoothly.
I bought an array controller because I didn't want to deal with software RAID and because, in my experience, the HP Smart Array controllers do a really sweet job and the arrays are easy to expand, manage, etc.



Hard Drives:
  • For the OS: SSD 128 GB (Here again, thanks to @Rainer! :))
  • For Data RAID: 2x4TB Western Digital as RAID 1



Software and Configurations:


OS

Debian Jessie 8.3 (Basic Installation + SSH Server)

Partitions


SSD Drive:


Device        Start       End   Sectors  Size Type
/dev/sdb1      2048      4095      2048    1M BIOS boot
/dev/sdb2      4096  29300735  29296640   14G Linux filesystem
/dev/sdb3  29300736  60551167  31250432 14.9G Linux swap
/dev/sdb4  60551168 250068991 189517824 90.4G Linux LVM

Where the Linux LVM Partition was configured as follow:

  VG     #PV #LV #SN Attr   VSize  VFree
  vgsys    1   6   4 wz--n- 90.37g 30.37g

  LV              VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvhome          vgsys  owi-aos--- 20.00g
  lvvar           vgsys  owi-aos--- 20.00g

Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/vgsys-lvhome   20G   45M   19G   1% /home
/dev/mapper/vgsys-lvvar    20G  469M   19G   3% /var 

Data Array:


Device     Start        End    Sectors  Size Type
/dev/sda1   2048 7813971598 7813969551  3.7T Linux filesystem

For now, I didn't use the entire partition for the LV, but as it's an LVM partition, I can extend it at any time when needed.

  VG     #PV #LV #SN Attr   VSize  VFree
  vgdata   1   3   2 wz--n-  3.64t  1.59t

  LV              VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvdata          vgdata owi-aos---  2.00t

Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vgdata-lvdata  2.0T  296G  1.6T  16% /data

Both drives also have LVM snapshot backups configured (more on that later on)

Extra Software:


HP Tools


For now, I just installed the following HP Software:
ii  hp-health                        10.0.0.1.3-4.               amd64        hp System Health Application and Command line Utility Package
ii  hponcfg                          4.4.0.8-2.                  amd64        RILOE II/iLo online configuration utility
ii  hpssacli                         2.10-14.0                   amd64        HP Command Line Smart Storage Administration Utility

More info on how to configure the official HP repositories and install the HP tools can be found here: HOWTO HP Tools


solaar


I own a Logitech Illuminated Living-Room Keyboard K830, so I installed solaar to make it work. It's easy as pie and it works wonders.

Just install solaar, connect your keyboard and magic happens :P

apt-get install solaar

To check whether your keyboard was paired, just type the following:

root@ragnar:~# solaar-cli show
Unifying Receiver [/dev/hidraw0:003B91DE] with 1 devices
1: Illuminated Living-Room Keyboard K830 [K830:94EEE1AD]

Cool, innit'? ;)

Desktop Environment


At first I was thinking of not installing any desktop environment, but for the other person living under the same roof it was easier than the command line, so here it is:

LXDE + lightDM

apt-get install lxde-core
apt-get install lightdm


Networking


  • iLO Port: 
iLO Homepage -> Network -> iLO Dedicated Network Port -> IPv4 Tab ->
Disable DHCP and add the respective values on the following fields:
    • IPv4 Address
    • Subnet Mask
    • Gateway IPv4 Address

Save and then Reset iLO


  • 1 Port for OS with bonding:

This is how my /etc/network/interfaces looks like:

# The primary network interface
auto bond0
iface bond0 inet static
    address 192.168.2.xxx
    netmask 255.255.255.0
    network 192.168.2.0
    gateway 192.168.2.1
    broadcast 192.168.2.255
    slaves eth0 eth1
    bond_mode active-backup
    bond_miimon 100
    bond_downdelay 200
    bond_updelay 200

For now the second port has no link, but the bond is configured anyway.

To see what I'm talking about, here is an overview of the ip a l command output:

2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether b0:5a:da:87:ee:d0 brd ff:ff:ff:ff:ff:ff
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond0 state DOWN group default qlen 1000
    link/ether b0:5a:da:87:ee:d0 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether b0:5a:da:87:ee:d0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.xxx/24 brd 192.168.2.255 scope global bond0
       valid_lft forever preferred_lft forever
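If you want to double-check the failover state, the bonding driver also exposes its status through procfs. A quick sketch (the path assumes the bond is named bond0, as in the config above):

```shell
# Show the kernel's view of the bond; the path assumes the bond is named bond0.
# If the file is missing, the bonding driver is simply not active on this machine.
if [ -r /proc/net/bonding/bond0 ]; then
    cat /proc/net/bonding/bond0     # lists the mode, MII status and the active slave
else
    echo "bond0 not active"
fi
```

With active-backup mode you should see one "Currently Active Slave" and both slaves listed underneath.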

Backups


LVM-Snapshots


So, LVM snapshots were configured with a script that runs every day and keeps 2 snapshots (1 per day).

As I don't have time to explain in detail how this works (and I'm sorry about it), here is some useful HOWTO from the people of "HowtoForge": Back Up (And Restore) LVM Partitions With LVM Snapshots
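I won't document my exact script here, but as a rough sketch of the rotation idea (the volume group vg0, the logical volume home and the 5G snapshot size are assumptions; adjust them to your own layout):

```shell
# Sketch: create today's snapshot of /dev/vg0/home and drop the one from two
# days ago, so only two snapshots (one per day) are kept.
# vg0, home and the 5G snapshot size are assumptions; adapt to your layout.
lvcreate --size 5G --snapshot --name home-snap-"$(date +%F)" /dev/vg0/home
lvremove -f /dev/vg0/home-snap-"$(date -d '2 days ago' +%F)" 2>/dev/null || true
```

Make sure the volume group has enough free extents for the snapshot, otherwise lvcreate will refuse to run.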

Rsnapshot


A useful and nice backup choice. As I said before, no time to explain it in detail, but here is the official site on GitHub: Rsnapshot

For my server, I configured it to back up the following directories:

###############################
### BACKUP POINTS / SCRIPTS ###
###############################

# LOCALHOST
backup /home/ ragnar/
backup /etc/ ragnar/
backup /opt/ ragnar/

with the following retention:

# BACKUP INTERVALS #
retain hourly 6
retain daily 7
retain weekly 4
retain monthly 12
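Those retain lines only define how many copies of each interval to keep; the actual runs are driven by cron. A crontab matching the retention above might look like this (the times are just a sketch, and the path assumes the Debian default /usr/bin/rsnapshot):

```
# /etc/cron.d/rsnapshot (sketch; adjust the times to taste)
0 */4 * * *   root  /usr/bin/rsnapshot hourly
30 3  * * *   root  /usr/bin/rsnapshot daily
0  3  * * 1   root  /usr/bin/rsnapshot weekly
30 2  1 * *   root  /usr/bin/rsnapshot monthly
```

Running hourly every 4 hours gives exactly the 6 hourly copies that "retain hourly 6" keeps.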


Kernel Tuning


swappiness


I've set vm.swappiness to 10. Here you can find more info about how swappiness works and why I chose 10 as the value for my server: vm.swappiness




So this is all for now; if you have any questions, please post them in the Comments section and I'll be happy to answer as soon as I get the time :)


Notes: IPTables, Dynamic DNS and more coming soon...

Monday, February 8, 2016

HOWTO Convert Unix timestamp into human readable date using MySQL

As easy as typing the following in MySQL:

mysql> select from_unixtime(1445506564);
+---------------------------+
| from_unixtime(1445506564) |
+---------------------------+
| 2015-10-22 11:36:04       |
+---------------------------+
1 row in set (0.00 sec)

mysql>
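If you are already on the server's shell, the same conversion can be cross-checked with GNU date (the timestamp below is the one from the MySQL example; note that date prints the time in your local timezone by default, or in UTC with -u, while MySQL uses the session timezone):

```shell
# Convert the same Unix timestamp with GNU date; -u prints it in UTC
date -u -d @1445506564 '+%Y-%m-%d %H:%M:%S'   # 2015-10-22 09:36:04 (UTC)
```

09:36:04 UTC matches the 11:36:04 from the MySQL example, which was printed in CEST (UTC+2).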

Tuesday, January 12, 2016

What is Swappiness (vm.swappiness) and how to change it on your Linux Server

What is Swappiness?

In Linux, the "swap area" is a dedicated space on your hard drive, traditionally sized at about twice the capacity of your RAM, which together with the RAM makes up the total virtual memory of your system. From time to time, the Linux kernel uses this swap space by copying chunks from your RAM to the swap, allowing active processes that require more memory than is physically available to run.

Swappiness is the kernel parameter that defines how much (and how often) your Linux kernel will copy RAM contents to swap. This parameter's default value is “60” and it can take anything from “0” to “100”. The higher the value of the swappiness parameter, the more aggressively your kernel will swap.

Why change it?

The default value is a one-size-fits-all solution that can't possibly be equally efficient across all individual use cases, hardware specifications and user needs. Moreover, the swappiness of a system is a primary factor in the overall responsiveness and performance of an OS. That said, it is very important to understand how swappiness works and how the various configurations of this parameter could improve the operation of your system and thus your everyday usage experience.

As RAM is so much larger and cheaper than it used to be, many users nowadays have enough memory to almost never need the swap file. The obvious benefit is that no system resources are occupied by the swapping process and that cached files are not moved back and forth between RAM and swap for no reason.

Factors for consideration

There is some math involved in swappiness that should be considered when changing your settings. A value of "60" roughly means that your kernel will start swapping when RAM reaches 40% capacity. Setting it to "100" means that your kernel will try to swap everything. Setting it to 10 (like I did in this tutorial) means that swap will only be used when RAM is about 90% full, so if you have enough RAM, this could be a safe option that would easily improve the performance of your system.

Some users, though, want the whole cake, and so they set swappiness to "1" or even "0". "1" is the minimum possible "active swapping" setting, while "0" disables proactive swapping completely and only falls back to swap when RAM is completely full. While these settings can still work in theory, testing them on low-spec systems with 2GB of RAM or less may cause freezes and make the OS completely unresponsive. Generally, finding the golden mean between overall system performance and response latency requires quite some experimentation (as always).

Here is some help to understand the values:
vm.swappiness = 0 -> The kernel will swap only to avoid an out of memory condition. See the "VM Sysctl documentation".
vm.swappiness = 1 -> Kernel version 3.5 and over, as well as kernel version 2.6.32-303 and over: Minimum amount of swapping without disabling it entirely.
vm.swappiness = 10 -> This value is sometimes recommended to improve performance when sufficient memory exists in a system.
vm.swappiness = 60 -> Default value.
vm.swappiness = 100 -> The kernel will swap aggressively.

How to change it?

The swappiness parameter value is stored in a simple configuration text file: /proc/sys/vm/swappiness. To check the value set on your system, you can do:
user@ximunix:~$ sudo cat /proc/sys/vm/swappiness
60

You can change the value at runtime by running either:
user@ximunix:~$ echo 10 | sudo tee /proc/sys/vm/swappiness
OR:
user@ximunix:~$ sudo sysctl vm.swappiness=10

Note that a plain "echo 10 > /proc/sys/vm/swappiness" only works from a root shell, because the redirection is performed by your (unprivileged) shell, not by sudo; tee avoids that problem.

Or whatever value between 0 and 100 that fits your system and needs.

To ensure that the swappiness value was correctly changed to the desired one, you simply type:
user@ximunix:~$ sudo cat /proc/sys/vm/swappiness
10

This change has an immediate effect on your system's operation, so no reboot is required. In fact, rebooting will revert the swappiness back to its default value (60). If you have thoroughly tested your desired swappiness value and found that it works reliably, you can make the change permanent in /etc/sysctl.conf. Open it as root and add the following line at the bottom:
vm.swappiness=10

Then, save the text file and you're done!

HOWTO stop the Apache “internal dummy connection” from logging

If you see many entries like this in your Apache website access log:

::1 - - [12/Jan/2016:06:25:02 +0100] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Debian) (internal dummy connection)"

You can easily prevent these entries from being logged by doing the following:

In our case, every VHost has its own log file, which is why we see many of these entries in "access.log" or "other_vhosts_access.log".

To change this we can edit the following VHost file, that comes activated by default in Apache:

vim /etc/apache2/sites-available/default

and there replace the following entry:

CustomLog ${APACHE_LOG_DIR}/access.log combined

with these lines:

#Prevent logging for local requests
SetEnvIf Remote_Addr "127\.0\.0\.1" dontlog 
SetEnvIf Remote_Addr "::1" dontlog

CustomLog ${APACHE_LOG_DIR}/access.log combined env=!dontlog
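As a variant, the dummy connection can also be matched on the User-Agent string it announces itself with, instead of the source address (SetEnvIf on User-Agent is standard mod_setenvif syntax); this catches it even if the request arrives from an address other than the loopback ones:

```
# Alternative: match on the User-Agent the dummy connection identifies itself with
SetEnvIf User-Agent "internal dummy connection" dontlog
```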

After that, we can reload apache2 and it should work right away. :)

service apache2 reload