Cisco Announces Definitive Agreement to Acquire PostPath

PostPath’s Email and Calendaring Software to Enhance Cisco’s WebEx Collaboration Platform

SAN JOSE, Calif. - August 27, 2008 - Building upon its commitment to provide a comprehensive collaboration portfolio, Cisco today announced its intent to acquire privately held PostPath, Inc., a provider of innovative email and calendaring software. Based in Mountain View, Calif. with additional development operations in Sofia, Bulgaria, PostPath will enhance the existing email and calendaring capabilities of Cisco’s WebEx Connect collaboration platform.

In today’s fast-paced business environment, effective, adaptive collaboration is critical to creating and sustaining a competitive advantage. With PostPath’s software, Cisco will extend the e-mail and calendar functionality of its flexible software-as-a-service (SaaS)-based collaborative platform that includes instant messaging, voice, video, data, document management and Web 2.0 applications. This combination will enable customers to use collaboration to accelerate business processes, within and between businesses.

“The acquisition of PostPath complements our strategy to develop an integrated collaboration platform designed for how we work today and into the future, providing real productivity gains and a more satisfying user experience,” said Doug Dennerline, Cisco senior vice president, Collaboration Software Group. “Our ‘cloud-based’ delivery model offers our customers rapid deployment and compelling economics.”

PostPath offers a Linux-based e-mail, calendaring and collaboration solution. It is interoperable with many other e-mail solutions and provides a browser-independent AJAX Web client. In addition, PostPath’s software is compatible with a number of mobile clients.

PostPath’s software is highly secure and scalable, and it incorporates innovative Web 2.0 architectures to meet the requirements of large enterprises and small businesses alike to provide Cisco customers with a next-generation user experience.

The PostPath acquisition exemplifies Cisco’s “build, buy, and partner” innovation strategy to move quickly into new markets and capture key market transitions. In addition to internal software innovations, Cisco actively employs investments in, and acquisitions of, other companies to support its software strategy; recent purchases include industry leaders WebEx, IronPort and Securent.

Under the terms of the agreement, Cisco will pay approximately $215 million in exchange for all shares of PostPath. The transaction will be accounted for in accordance with generally accepted accounting principles. The acquisition is subject to various standard closing conditions and is expected to be complete in Cisco’s first quarter of fiscal year 2009. Upon completion of the acquisition, PostPath employees will become part of the Cisco Collaboration Software Group (CSG). CSG is part of the recently established Software Group, consisting of Cisco’s major software businesses, including the IOS network operating system, network and service management, Unified Communications solutions, policy management, and SaaS offerings.

CCNA Security Certification

CCNA Security Certification meets the needs of IT professionals who are responsible for network security. It confirms an individual’s skills for job roles such as Network Security Specialist, Security Administrator, and Network Security Support Engineer. The certification validates skills in the installation, troubleshooting, and monitoring of network devices to maintain the integrity, confidentiality, and availability of data and devices, and it develops competency in the technologies that Cisco uses in its security structure.

Students completing the recommended Cisco training will gain an introduction to core security technologies as well as how to develop security policies and mitigate risks. IT organizations that employ CCNA Security-holders will have IT staff that can develop a security infrastructure, recognize threats and vulnerabilities to networks, and mitigate security threats.

More information can be found here: http://www.cisco.com/web/learning/le3/le2/le0/le1/learning_certification_type_home.html

Getting MySQL Status Values With mysqlreport

mysqlreport is a Perl script that displays a well-formatted report of important MySQL status variables (taken from the output of MySQL's SHOW STATUS) to help you understand what is happening under MySQL's hood and diagnose problems.

I do not issue any guarantee that this will work for you!

1 Preliminary Note

mysqlreport works on any Linux distribution. Of course, Perl and MySQL must already be installed and working.
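
To verify both prerequisites quickly, you can run:

perl -v
mysql --version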

2 Installing mysqlreport

The installation is very easy. Just run:

cd /usr/local/sbin
wget hackmysql.com/scripts/mysqlreport
chmod 755 mysqlreport
cd /

That's it!

3 Using mysqlreport

Run

mysqlreport --help

to get a list of available options:

server2:/# mysqlreport --help
mysqlreport v3.2 May 26 2007
mysqlreport makes an easy-to-read report of important MySQL status values.

Command line options (abbreviations work):
--user USER Connect to MySQL as USER
--password PASS Use PASS or prompt for MySQL user's password
--host ADDRESS Connect to MySQL at ADDRESS
--port PORT Connect to MySQL at PORT
--socket SOCKET Connect to MySQL at SOCKET
--no-mycnf Don't read ~/.my.cnf
--infile FILE Read status values from FILE instead of MySQL
--outfile FILE Write report to FILE
--email ADDRESS Email report to ADDRESS (doesn't work on Windows)
--flush-status Issue FLUSH STATUS; after getting current values
--relative X Generate relative reports. If X is an integer,
reports are live from the MySQL server X seconds apart.
If X is a list of infiles, reports are generated
from the infiles in the order that the infiles are given.
--report-count N Collect N number of live relative reports (default 1)
--detach Fork and detach from terminal (run in background)
--help Prints this
--debug Print debugging information

Extra Reports:
--dtq Show Distribution of Total Questions
--dms Show DMS details
--com N Show top N number of non-DMS questions
--sas Show SELECT and Sort report
--qcache Show Query Cache report
--tab Show Thread, Aborts, and Bytes reports
--innodb Show InnoDB report
--innodb-only Show only InnoDB report (hide ALL other reports)
--dpr Show Data, Pages, Rows report in InnoDB report
--all Show ALL extra reports (if possible)

Visit http://hackmysql.com/mysqlreport for more information.
server2:/#

The standard usage of mysqlreport is as follows:

mysqlreport --user root --password

server2:/# mysqlreport --user root --password
Password for database user root: xxxxxxx
MySQL 4.0.21-log uptime 533 16:36:2 Tue Nov 27 15:29:50 2009

__ Key _________________________________________________________________
Buffer used 15.22M of 16.00M %Used: 95.13
Write hit 60.57%
Read hit 99.50%

__ Questions ___________________________________________________________
Total 1.88G 40.7/s
Slow 594 0.0/s %Total: 0.00 %DMS: 0.00
DMS 57.33M 1.2/s 3.05

__ Table Locks _________________________________________________________
Waited 4.51k 0.0/s %Total: 0.01
Immediate 72.89M 1.6/s

__ Tables ______________________________________________________________
Open 64 of 64 %Cache: 100.00
Opened 4.04M 0.1/s

__ Connections _________________________________________________________
Max used 354 of 500 %Max: 70.80
Total 5.48M 0.1/s

__ Created Temp ________________________________________________________
Disk table 166.53k 0.0/s
Table 1.23M 0.0/s
File 10 0.0/s
server2:/#
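
mysqlreport can also produce relative reports that show how the counters change over time (see the --relative and --report-count options in the help output above). For example, something like the following should take five live snapshots 60 seconds apart and write them to a file:

mysqlreport --user root --password --relative 60 --report-count 5 --outfile /tmp/mysqlreport.txt

/tmp/mysqlreport.txt is just an example path; any writable location will do.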


Running MySQL 4 And MySQL 5 Concurrently

This tutorial shows how to install MySQL 5 on a system where MySQL 4 is already running. It also shows how to configure phpMyAdmin to use both databases.

1 Download and install MySQL 5.x

Download the source code from http://dev.mysql.com/downloads/mysql/5.0.html#source

tar -zxvf mysql.version.tgz
cd mysql.version
./configure --prefix=/var/lib/mysql5 \
--with-unix-socket-path=/var/lib/mysql5/mysql5.sock \
--with-tcp-port=3307
make
make install

2 Create an appropriate cnf/ini file so that MySQL knows where to place the data files, and to set other configuration options.

vi /etc/my5.cnf

Below is a sample file.

# Example MySQL config file for large systems.
#
# This is for a large system with memory = 512M where the system runs mainly MySQL.
#
# You can copy this file to
# /etc/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options (in this
# installation this directory is /var/lib/mysql5/var) or
# ~/.my.cnf to set user-specific options.
#
# In this file, you can use all long options that a program supports.
# If you want to know which options a program supports, run the program
# with the "--help" option.

# The following options will be passed to all MySQL clients
#[client]
#password = your_password
#port = 3307
#socket = /var/lib/mysql5/mysql5.sock

# Here follows entries for some specific programs
# The MySQL server
[mysqld]
port = 3307
socket = /var/lib/mysql5/mysql5.sock
old_passwords=1
skip-locking
key_buffer = 128M
max_allowed_packet = 1M
table_cache = 256
sort_buffer_size = 1M
read_buffer_size = 1M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size= 16M
[mysql.server]
user=mysql
[mysql]
default-character-set=latin1
[mysqld_safe]
err-log=/var/log/mysqld_5.log
pid-file=/var/lib/mysql5/mysqld5.pid

Type :wq to save the file and quit vi.

Run this to initialize the database directory:

./scripts/mysql_install_db --defaults-file=/etc/my5.cnf --user=mysql

Add this line to /etc/rc.local so that MySQL 5 starts when the system boots:

/var/lib/mysql5/bin/mysqld_safe --defaults-file=/etc/my5.cnf --user=mysql &
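
Once mysqld_safe is running, you can verify that the new server answers on its own socket and port (both as configured in /etc/my5.cnf; the client path follows from the --prefix used during compilation):

/var/lib/mysql5/bin/mysql --socket=/var/lib/mysql5/mysql5.sock -u root
/var/lib/mysql5/bin/mysql --host=127.0.0.1 --port=3307 -u root

The existing MySQL 4 server remains reachable on the default port 3306 and its usual socket.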

3 Now configure phpMyAdmin to access both servers, MySQL 4.x and 5.x. Below is a sample config.inc.php file.

<?php
/* Servers configuration */
$i = 0;
/* Server DiademGW_MySQL-4 (cookie) [1] */
$i++;
$cfg['Servers'][$i]['host'] = 'localhost';
$cfg['Servers'][$i]['extension'] = 'mysql';
$cfg['Servers'][$i]['connect_type'] = 'tcp';
$cfg['Servers'][$i]['compress'] = false;
$cfg['Servers'][$i]['auth_type'] = 'cookie';
$cfg['Servers'][$i]['verbose'] = 'MySQL-4';
/* Server DiademGW_MySQL-5 (cookie) [2] */
$i++;
$cfg['Servers'][$i]['host'] = 'localhost';
$cfg['Servers'][$i]['extension'] = 'mysql';
$cfg['Servers'][$i]['port'] = '3307';
$cfg['Servers'][$i]['socket'] = '/var/lib/mysql5/mysql5.sock'; /*actual socket path*/
$cfg['Servers'][$i]['connect_type'] = 'socket';
$cfg['Servers'][$i]['compress'] = false;
$cfg['Servers'][$i]['auth_type'] = 'cookie';
$cfg['Servers'][$i]['verbose'] = 'MySQL-5';
/* End of servers configuration */
$cfg['blowfish_secret'] = '475e8ba09cb6c4.57557095';
?>
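
To make sure phpMyAdmin will be able to reach both servers, you can first ping each one from the shell:

mysqladmin -u root -p ping
/var/lib/mysql5/bin/mysqladmin --socket=/var/lib/mysql5/mysql5.sock -u root -p ping

Both commands should answer with "mysqld is alive".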




How To Repair MySQL Replication

If you have set up MySQL replication, you probably know this problem: sometimes invalid MySQL queries cause the replication to stop working. In this short guide I explain how you can repair the replication on the MySQL slave without having to set it up from scratch again.

I do not issue any guarantee that this will work for you!

1 Identifying The Problem

To find out whether replication is working and, if not, what caused it to stop, take a look at the logs. On Debian, for example, MySQL logs to /var/log/syslog:

grep mysql /var/log/syslog

server1:/home/admin# grep mysql /var/log/syslog
May 29 09:56:08 http2 mysqld[1380]: 080529 9:56:08 [ERROR] Slave: Error 'Table 'mydb.taggregate_temp_1212047760' doesn't exist' on query. Default database: 'mydb'. Query: 'UPDATE thread AS thread,taggregate_temp_1212047760 AS aggregate
May 29 09:56:08 http2 mysqld[1380]: ^ISET thread.views = thread.views + aggregate.views
May 29 09:56:08 http2 mysqld[1380]: ^IWHERE thread.threadid = aggregate.threadid', Error_code: 1146
May 29 09:56:08 http2 mysqld[1380]: 080529 9:56:08 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'mysql-bin.001079' position 203015142
server1:/home/admin#

You can see what query caused the error, and at what log position the replication stopped.

To verify that the replication is really not working, log in to MySQL:

mysql -u root -p

On the MySQL shell, run:

mysql> SHOW SLAVE STATUS \G

If one of Slave_IO_Running or Slave_SQL_Running is set to No, then the replication is broken:

mysql> SHOW SLAVE STATUS \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 1.2.3.4
Master_User: slave_user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.001079
Read_Master_Log_Pos: 269214454
Relay_Log_File: slave-relay.000130
Relay_Log_Pos: 100125935
Relay_Master_Log_File: mysql-bin.001079
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB: mydb
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1146
Last_Error: Error 'Table 'mydb.taggregate_temp_1212047760' doesn't exist' on query. Default database: 'mydb'.
Query: 'UPDATE thread AS thread,taggregate_temp_1212047760 AS aggregate
SET thread.views = thread.views + aggregate.views
WHERE thread.threadid = aggregate.threadid'
Skip_Counter: 0
Exec_Master_Log_Pos: 203015142
Relay_Log_Space: 166325247
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
1 row in set (0.00 sec)

mysql>


2 Repairing The Replication

Just to be safe, we stop the slave first:

mysql> STOP SLAVE;

Fixing the problem is actually quite easy. We tell the slave to simply skip the invalid SQL query:

mysql> SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;

This tells the slave to skip one query (which is the invalid one that caused the replication to stop). If you'd like to skip two queries, you'd use SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 2; instead and so on.

That's it already. Now we can start the slave again...

mysql> START SLAVE;

... and check if replication is working again:

mysql> SHOW SLAVE STATUS \G

mysql> SHOW SLAVE STATUS \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 1.2.3.4
Master_User: slave_user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.001079
Read_Master_Log_Pos: 447560366
Relay_Log_File: slave-relay.000130
Relay_Log_Pos: 225644062
Relay_Master_Log_File: mysql-bin.001079
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB: mydb
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 447560366
Relay_Log_Space: 225644062
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
1 row in set (0.00 sec)

mysql>

As you see, both Slave_IO_Running and Slave_SQL_Running are set to Yes now.

Now leave the MySQL shell...

mysql> quit;

... and check the log again:

grep mysql /var/log/syslog

server1:/home/admin# grep mysql /var/log/syslog
May 29 09:56:08 http2 mysqld[1380]: 080529 9:56:08 [ERROR] Slave: Error 'Table 'mydb.taggregate_temp_1212047760' doesn't exist' on query. Default database: 'mydb'. Query: 'UPDATE thread AS thread,taggregate_temp_1212047760 AS aggregate
May 29 09:56:08 http2 mysqld[1380]: ^ISET thread.views = thread.views + aggregate.views
May 29 09:56:08 http2 mysqld[1380]: ^IWHERE thread.threadid = aggregate.threadid', Error_code: 1146
May 29 09:56:08 http2 mysqld[1380]: 080529 9:56:08 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'mysql-bin.001079' position 203015142
May 29 11:42:13 http2 mysqld[1380]: 080529 11:42:13 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.001079' at position 203015142, relay log '/var/lib/mysql/slave-relay.000130' position: 100125935
server1:/home/admin#
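
If a whole batch of invalid statements has piled up, skipping them one by one by hand gets tedious. The following bash sketch automates the same procedure in a loop. It is only an illustration: it assumes the root password is exported in the MYSQL_PWD environment variable, and, more importantly, that every remaining error is one you are sure can be skipped safely, since each skip silently discards one statement from the master.

#!/bin/bash
# Skip one statement at a time until the slave SQL thread runs again.
# The mysql client reads the password from the MYSQL_PWD environment variable.
while true; do
  RUNNING=$(mysql -u root -e "SHOW SLAVE STATUS \G" | awk '/Slave_SQL_Running:/ {print $2}')
  [ "$RUNNING" = "Yes" ] && break
  mysql -u root -e "SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1; START SLAVE;"
  sleep 1
done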


Tuning MySQL Performance with MySQLTuner

MySQLTuner is a Perl script that analyzes your MySQL performance and, based on the statistics it gathers, recommends which variables you should adjust to increase performance. That way, you can tune your my.cnf file to tease out the last bit of performance from your MySQL server and make it work more efficiently.

This document comes without warranty of any kind! I do not issue any guarantee that this will work for you!

1 Using MySQLTuner

You can download the MySQLTuner script as follows:

wget http://mysqltuner.com/mysqltuner.pl

In order to run it, we must make it executable:

chmod +x mysqltuner.pl

Afterwards, we can run it. You need your MySQL root password for it:

./mysqltuner.pl

server1:~# ./mysqltuner.pl

>> MySQLTuner 0.9.8 - Major Hayden
>> Bug reports, feature requests, and downloads at http://mysqltuner.com/
>> Run with '--help' for additional options and output filtering
Please enter your MySQL administrative login:
<-- root
Please enter your MySQL administrative password: <-- yourrootsqlpassword

-------- General Statistics --------------------------------------------------
[--] Skipped version check for MySQLTuner script
[!!] Your MySQL version 4.1.11-Debian_etch1-log is EOL software! Upgrade soon!
[OK] Operating on 32-bit architecture with less than 2GB RAM

-------- Storage Engine Statistics -------------------------------------------
[--] Status: +Archive -BDB -Federated +InnoDB +ISAM -NDBCluster
[--] Data in MyISAM tables: 301M (Tables: 2074)
[--] Data in HEAP tables: 379K (Tables: 9)
[!!] InnoDB is enabled but isn't being used
[!!] ISAM is enabled but isn't being used
[!!] Total fragmented tables: 215

-------- Performance Metrics -------------------------------------------------
[--] Up for: 12d 18h 33m 30s (1B q [1K qps], 185K conn, TX: 3B, RX: 377M)
[--] Reads / Writes: 78% / 22%
[--] Total buffers: 2.6M per thread and 58.0M global
[OK] Maximum possible memory usage: 320.5M (20% of installed RAM)
[OK] Slow queries: 0% (17/1B)
[OK] Highest usage of available connections: 32% (32/100)
[OK] Key buffer size / total MyISAM indexes: 16.0M/72.3M
[OK] Key buffer hit rate: 99.9%
[OK] Query cache efficiency: 99.9%
[!!] Query cache prunes per day: 47549
[OK] Sorts requiring temporary tables: 0%
[!!] Temporary tables created on disk: 28%
[OK] Thread cache hit rate: 99%
[!!] Table cache hit rate: 0%
[OK] Open file limit used: 12%
[OK] Table locks acquired immediately: 99%
[!!] Connections aborted: 20%

-------- Recommendations -----------------------------------------------------
General recommendations:
Add skip-innodb to MySQL configuration to disable InnoDB
Add skip-isam to MySQL configuration to disable ISAM
Run OPTIMIZE TABLE to defragment tables for better performance
Enable the slow query log to troubleshoot bad queries
When making adjustments, make tmp_table_size/max_heap_table_size equal
Reduce your SELECT DISTINCT queries without LIMIT clauses
Increase table_cache gradually to avoid file descriptor limits
Your applications are not closing MySQL connections properly
Variables to adjust:
query_cache_size (> 16M)
tmp_table_size (> 32M)
max_heap_table_size (> 16M)
table_cache (> 64)

server1:~#


You should carefully read the output, especially the recommendations at the end. It shows exactly which variables you should adjust in the [mysqld] section of your my.cnf (on Debian and Ubuntu the full path is /etc/mysql/my.cnf). Whenever you change your my.cnf, make sure that you restart MySQL. You can then run MySQLTuner again to see if it has further recommendations to improve the MySQL performance. This way, you can optimize MySQL step by step.
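
For example, based on the recommendations above, you might raise these values in the [mysqld] section (starting points only, not definitive settings; note that tmp_table_size and max_heap_table_size are kept equal, as the tool suggests):

query_cache_size = 32M
tmp_table_size = 64M
max_heap_table_size = 64M
table_cache = 128

Then restart MySQL (on Debian: /etc/init.d/mysql restart) and run MySQLTuner again.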


Deny or allow countries with apache .htaccess

Introduction

The following script uses the blogama.org IP geolocation API to automatically generate an Apache .htaccess file that denies or allows specific countries. You can put this script under crontab, and the .htaccess rules will be updated automatically. It can also update multiple .htaccess files.

Deny or allow?

First you need to understand the meaning of these two rules in the .htaccess file. If you set "deny" in the script for countries "US,CA" (USA and Canada), all traffic from USA or Canada will be blocked. On the other hand, if you set "allow" it will only accept traffic from these two countries, all others being blocked.

Countries code

You need to know the ISO country code you want to deny/allow. The list is available here.

Usage without the automated script

The API can also be queried directly (this is the same URL the script below fetches): http://blogama.org/country_query.php?country=<country>&output=<output>, where country is the list of countries, separated by commas, and output is either htaccess_deny or htaccess_allow.

How does the script work?

You will have to create a text file listing all the .htaccess files (with complete paths) you wish the script to update. Any other information in your .htaccess files will remain untouched; the script only updates the portion between the tags "#COUNTRY_BLOCK_START" and "#COUNTRY_BLOCK_END".

Before you start with the script

Create a text file named htaccessfile.txt (in the WORKDIR of the script, see below). In that file, put all (existing!) .htaccess files you wish to update. For example:

/var/www/example.com/.htaccess
/var/www/mydomain.com/.htaccess

Script configuration

On top of the script, you will find this section. You need to modify these variables if needed:

###MODIFY THIS SECTION###
WORKDIR="/root/"
HTACCESSFILE="htaccessfile.txt"
HTACCESSBLOCK="htaccess-blocklist.txt"
TEMPFILE="htaccess.temp"
COUNTRIES="US,CA"
TYPE="allow"
#########################

WORKDIR: is a writable directory where the script will be located.
HTACCESSFILE: is the file where you will put your .htaccess paths.
HTACCESSBLOCK and TEMPFILE: are temporary files that will be deleted at the end of the script run.
COUNTRIES: is the list of countries you wish to deny/allow, separated with a comma.
TYPE: "allow" or "deny" access to these countries.

The script

#!/bin/bash
###BLOGAMA.ORG###
###MODIFY THIS SECTION###
WORKDIR="/root/"
HTACCESSFILE="htaccessfile.txt"
HTACCESSBLOCK="htaccess-blocklist.txt"
TEMPFILE="htaccess.temp"
COUNTRIES="US,CA"
TYPE="deny"

#########################

#####DO NOT MAKE MODIFICATIONS BELOW#####

cd "$WORKDIR"
# Get the ruleset from the blogama.org API
wget -c --output-document="$HTACCESSBLOCK" "http://blogama.org/country_query.php?country=$COUNTRIES&output=htaccess_$TYPE"
for i in $( cat "$WORKDIR$HTACCESSFILE" ); do
  if [ -f "$i" ]; then
    if grep -q "COUNTRY_BLOCK_START" "$i"; then
      # Block already present: replace it
      sed '/#COUNTRY_BLOCK_START/,/#COUNTRY_BLOCK_END/d' "$i" > "$WORKDIR$TEMPFILE"
      cat "$WORKDIR$HTACCESSBLOCK" >> "$WORKDIR$TEMPFILE"
      mv "$WORKDIR$TEMPFILE" "$i"
    else
      # Block not yet present: append it
      cat "$WORKDIR$HTACCESSBLOCK" >> "$i"
    fi
  fi
done
rm -f "$WORKDIR$HTACCESSBLOCK"

Make it executable:

chmod +x whatever_you_called_this_script

Add it to your crontab:

* * * * * /path/to/whatever_you_called_this_script >/dev/null 2>&1
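
For reference, the section the script maintains in each .htaccess file looks roughly like this (the real network ranges come from the blogama.org API; the addresses below are made up for illustration):

#COUNTRY_BLOCK_START
order allow,deny
allow from all
deny from 24.48.0.0/13
deny from 64.4.0.0/11
#COUNTRY_BLOCK_END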

Note: Use this script at your own risk.

Set Date Time in Cisco Router

These instructions show how to set the date and time on a Cisco router, configure the router as an NTP server, and have another router use it as an NTP peer.

Set date time
R#clock ?
set Set the time and date

R#clock set ?
hh:mm:ss Current Time

R#clock set 11:20:00 ?
<1-31> Day of the month
MONTH Month of the year

R#clock set 11:20:00 26?
<1-31>

R#clock set 11:20:00 26 ?
MONTH Month of the year

R#clock set 11:20:00 26 SEPT ?
<1993-2035> Year

R#clock set 11:20:00 26 SEPT 2007

R#sh clock
11:20:03.415 UTC Wed Sep 26 2007

R(config)#clock timezone GMT 7

R(config)#do sh clock
11:21:11.979 GMT Wed Sep 26 2007

R(config)#ntp clock-period 17179574

R(config)#ntp master

R(config)#do sh clock
11:31:30.675 GMT Wed Sep 26 2007

NTP peer
RX(config)#ntp clock-period 17179576

RX(config)#ntp peer 1.2.3.4

RX(config)#do sh clock
.04:31:33.639 UTC Wed Sep 26 2007
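
To check that the two routers have actually synchronized, you can use the standard IOS show commands on either side:

RX#show ntp status
RX#show ntp associations

show ntp status reports whether the clock is synchronized and at which stratum; show ntp associations lists the configured peers and their reachability.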

A Sample syslog-ng conf file

################################################################################
# Syslog-ng configuration file. Revised and rewritten by me (Shahjahan Siraj)
#
###############################################################
# First, set some global options.

options {
# use_fqdn(yes);
# use_dns(yes);
# dns_cache(yes);
keep_hostname(yes);
long_hostnames(off);
sync(1);
log_fifo_size(1024);
};

###############################################################
#
# This is the default behavior of sysklogd package
# Logs may come from unix stream, but not from another machine.
#
#source src { unix-stream("/dev/log"); internal(); };


source src {
# don't read from /proc/kmsg and run klogd also (Linux)
pipe("/proc/kmsg");
# file("/proc/kmsg") log_prefix("kernel: ");
unix-stream("/dev/log");
# unix-stream("/chroot/named/dev/log");
internal();
udp();
# udp(ip("10.0.5.8") port(514));
tcp(port(5140) keep-alive(yes));
# tcp(ip("10.9.9.3") port(5140) keep-alive(yes));
};


###############################################################
# After that set destinations.

# First some standard logfile
#
destination authlog { file("/var/log/auth.log"); };
destination syslog { file("/var/log/syslog"); };
destination cron { file("/var/log/cron.log"); };
destination daemon { file("/var/log/daemon.log"); };
destination kern { file("/var/log/kern.log"); };
destination lpr { file("/var/log/lpr.log"); };
destination user { file("/var/log/user.log"); };
destination uucp { file("/var/log/uucp.log"); };


# These files hold the logs that come from the mail subsystem.
#
destination mail { file("/var/log/mail.log"); };
destination maillog { file("/var/log/maillog"); };
destination mailinfo { file("/var/log/mail.info"); };
destination mailwarn { file("/var/log/mail.warn"); };
destination mailerr { file("/var/log/mail.err"); };

# Logging for INN news system
#
destination newscrit { file("/var/log/news/news.crit"); };
destination newserr { file("/var/log/news/news.err"); };
destination newsnotice { file("/var/log/news/news.notice"); };

# Some `catch-all' logfiles.
#
destination debug { file("/var/log/debug"); };
destination messages { file("/var/log/messages"); };

# The root's console.
#
destination console { usertty("root"); };

# Virtual console.
#
destination console_all { file("/dev/tty8"); };

# The named pipe /dev/xconsole is for the `xconsole' utility. To use it,
# you must invoke `xconsole' with the `-file' option:
#
# $ xconsole -file /dev/xconsole [...]
#
destination xconsole { pipe("/dev/xconsole"); };

#
# scripts that accept syslog messages and mail them out
#
destination mail-alert { program("/usr/local/bin/syslog-mail"); };
destination mail-alert-perl { program("/usr/local/bin/syslog-mail-perl"); };

#
# hack to get swatch to read from stdin
#
destination swatch { program("/usr/bin/swatch --read-pipe=\"cat /dev/fd/0\""); };

#
# destination database
#
destination sqlsyslogd {
program("/usr/local/sbin/sqlsyslogd -u sqlsyslogd -t logs sqlsyslogd -p");
};

##########################################

# Here are the filter options. With these rules, we can set which
# messages go where.

filter f_attack_alert {
match("attackalert");
};

filter f_ssh_login_attempt {
program("sshd.*")
and match("(Failed|Accepted)")
and not match("Accepted (hostbased|publickey) for (root|zoneaxfr) from (10.4.3.1)");
};

filter f_authpriv { facility(auth, authpriv); };
filter f_syslog { not facility(auth, authpriv) and not facility(mail); };
filter f_cron { facility(cron); };
filter f_daemon { facility(daemon); };
filter f_kern { facility(kern); };
filter f_lpr { facility(lpr); };
filter f_mail { facility(mail); };
filter f_user { facility(user); };
filter f_uucp { facility(uucp); };

filter f_news { facility(news); };

filter f_debug { not facility(auth, authpriv, news, mail); };
filter f_messages { level(info .. warn)
and not facility(auth, authpriv, cron, daemon, mail, news); };
filter f_emergency { level(emerg); };

filter f_info { level(info); };
filter f_notice { level(notice); };
filter f_warn { level(warn); };
filter f_crit { level(crit); };
filter f_err { level(err); };

filter f_cnews { level(notice, err, crit) and facility(news); };
filter f_cother { level(debug, info, notice, warn) or facility(daemon, mail); };

###############################################################
#
# log statements actually send logs somewhere, to a file, across the network, etc
#

#log { source(src); filter(f_authpriv); destination(authlog); };
log { source(src); filter(f_authpriv); destination(syslog); };
log { source(src); filter(f_syslog); destination(syslog); };
#log { source(src); filter(f_cron); destination(cron); };
#log { source(src); filter(f_daemon); destination(daemon); };
log { source(src); filter(f_daemon); destination(messages); };
#log { source(src); filter(f_kern); destination(kern); };
log { source(src); filter(f_kern); destination(messages); };
log { source(src); filter(f_lpr); destination(lpr); };
log { source(src); filter(f_mail); destination(mail); };
#log { source(src); filter(f_user); destination(user); };
log { source(src); filter(f_user); destination(messages); };
log { source(src); filter(f_uucp); destination(uucp); };
log { source(src); filter(f_mail); destination(maillog); };
log { source(src); filter(f_mail); filter(f_info); destination(mailinfo); };
log { source(src); filter(f_mail); filter(f_warn); destination(mailwarn); };
log { source(src); filter(f_mail); filter(f_err); destination(mailerr); };
log { source(src); filter(f_news); filter(f_crit); destination(newscrit); };
log { source(src); filter(f_news); filter(f_err); destination(newserr); };
log { source(src); filter(f_news); filter(f_notice); destination(newsnotice); };
#log { source(src); filter(f_debug); destination(debug); };
log { source(src); filter(f_messages); destination(messages); };
log { source(src); filter(f_emergency); destination(console); };

#log { source(src); filter(f_cnews); destination(console_all); };
#log { source(src); filter(f_cother); destination(console_all); };

log { source(src); filter(f_cnews); destination(xconsole); };
log { source(src); filter(f_cother); destination(xconsole); };


# slap it into a MySQL database
log {
source(src);
destination(sqlsyslogd);
};

# find messages with "attackalert" in them, and send to the mail-alert script
log {
source(src);
filter(f_attack_alert);
destination(mail-alert-perl);
};

# find messages reporting attempted ssh logins, and send to the mail-alert script
log {
source(src);
filter(f_ssh_login_attempt);
destination(mail-alert-perl);
};

# send all logs to swatch for (near) real-time alerts
log {
source(src);
destination(swatch);
};


## set up logging to loghost
#destination loghost {
# tcp("10.0.0.1" port(514));
#};

# send everything to loghost, too
#log {
# source(src);
# destination(loghost);
#};

#
# automatic host sorting (usually used on a loghost)
#

# set it up
destination std {
file("/var/log/HOSTS/$HOST/$YEAR/$MONTH/$DAY/$FACILITY_$HOST_$YEAR_$MONTH_$DAY"
owner(root) group(root) perm(0600) dir_perm(0700) create_dirs(yes)
);
};

# log it
log {
source(src);
destination(std);
};

###############################################################
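
Before restarting syslog-ng with a new configuration like this one, it is worth checking the file for syntax errors. Newer syslog-ng releases offer a syntax-only switch for this (if your version lacks it, watch the daemon's startup messages instead):

syslog-ng -s -f /etc/syslog-ng/syslog-ng.conf
/etc/init.d/syslog-ng restart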

Logging with syslog-ng

Managing Linux system and application logging is important and a bit tricky. You want to capture important information, not bales of noise. You need to be able to find what you want in your logs without making it your life's work. The venerable old syslogd has served nobly for many years, but it's not quite up to meeting more complex needs. For this we have the next-generation Linux logger, syslog-ng.

Syslog-ng has a number of advantages over syslogd: better networking support, highly-configurable filters, centralized network logging, and lots more flexibility. For example, with syslogd all iptables messages get dumped in kern.log along with all the other kernel messages. Syslog-ng lets you direct iptables messages to a separate logfile. Syslogd uses only UDP; syslog-ng runs over UDP and TCP, so you can run it over encrypted network tunnels to a central logging server.

Installation
Syslog-ng is a standard package on all Linux distributions. The current stable release is 1.6. If you install it with apt-get or yum, sysklogd should be removed automatically. But if it isn't, make sure to remove it and to stop the syslogd daemon. (sysklogd is the package name, syslogd is the daemon.)

Configuration File Structure
With all of this flexibility comes a bit of a learning curve. Syslog-ng's configuration file is /etc/syslog-ng/syslog-ng.conf, or on some systems /etc/syslog-ng.conf. You'll need man 5 syslog-ng.conf to understand all the different options and parameters. syslog-ng.conf has five possible sections:

options{}
Global options. These can be overridden in any of the next four sections

source{}
Message sources, such as files, local sockets, or remote hosts

destination{}
Message destinations, such as files, local sockets, or remote hosts

filter{}
Filters are powerful and flexible; you can filter on any aspect of a log message, such as standard syslogd facility names (man 5 syslog.conf), log level, hostname, and arbitrary contents like words or number strings

log{}
Log statements connect the source, destination, and filter statements, and tell syslog-ng what to do with them

Here are some typical global options:

options {

sync (0);
log_fifo_size (2048);
create_dirs (yes);
group (logs);
dir_group (logs);
perm (0640);
dir_perm (0750);
};

Options statements must use options as defined in man 5 syslog-ng.conf. This is what the examples mean:

sync
How many lines of messages to keep in the output queue before writing to disk. (Logging output is called "messages.") Zero is the preferred option, since you want to be sure to capture everything, and not risk losing data in the event of a power failure

log_fifo_size
The maximum number of lines of messages in the output queue. The default is 100 lines of messages. You can do some calculations to figure out a suitable value, as this syslog-ng mail list post shows. To quote the relevant bit:

Each message source receives maximum 1024 bytes of data. A single log message is about 20 to 30 bytes. So on a single run each log source may emit 1024/20 = 50 messages. Your log fifo must be able to hold this number of messages for each source. So for two sources, you'll need at least 100 slots in your fifo.

It is easy to get confused by these two options. Incoming messages from remote hosts arrive in bursts, so what you need to do is make sure your log_fifo_size is large enough to handle these bursts. Otherwise you risk losing messages. Ultimately you will be limited by i/o and network speeds

create_dirs
Enable or disable automatic directory creation for destination files. In this article this value is "yes", so that remote host directories can be created on demand

group
dir_group
Set the group owner of logfiles and directories, so you don't have to read and analyze logs as the superuser

perm
dir_perm
Default permissions on new files and directories

Source, Destination, and Filter Statements
Source, destination, and filter names are arbitrary, like these examples show:

source s_local { internal(); unix-stream("/dev/log"); file("/proc/kmsg" log_prefix("kernel: ")); };
destination d_auth { file("/var/log/auth.log"); };
filter f_auth { facility(auth, authpriv); };

So "source s_local" could be "source frederick_remington_depew" if you so desired, and "destination d_auth" could be "destination moon." The convention is to prefix source names with "s", destination names with "d", and filter names with "f", but this is not required. All other elements of source, destination, and filter statements must use parameters as defined in man 5 syslog-ng.conf.

The "source s_local" example collects all locally-generated log messages into a single source statement. The "destination d_auth" statement directs authorization messages to /var/log/auth.log, which are defined by the "filter f_auth" statement. auth and authpriv are standard syslog facility names.

Log statements pull it all together:

log {source(s_local); filter(f_auth); destination(d_auth); };

So these four lines filter out all the auth and authpriv messages from all local messages, and write them to /var/log/auth.log.

Enabling Remote Logging
While it's possible to send log messages from remote clients with good old syslogd, it's really not adequate because it only transmits UDP packets. So you need syslog-ng installed on all client hosts as well. Adding these lines to syslog-ng.conf on the server accepts remote messages from clients and dumps them into a single file per host:

source s_remote { tcp(); };
destination d_clients { file("/var/log/HOSTS/$HOST"); };
log { source(s_remote); destination(d_clients); };

This is a very simple, but functional example for your client hosts that collects all local messages and sends them to the remote server:

#sample syslog-ng.conf for a remote client
source s_local { internal(); unix-stream("/dev/log"); file("/proc/kmsg" log_prefix("kernel: ")); };
destination d_loghost {tcp("192.168.1.10" port(514));};
log { source(s_local); destination(d_loghost); };
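
These examples assume the logging server actually accepts TCP connections on port 514. If the server runs a firewall, a rule along these lines is needed (iptables syntax; adjust the source network to your own LAN):

iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 514 -j ACCEPT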

Sample syslog-ng.conf
A complete sample configuration file is a bit long to include here, so you should take a look at the one that came with your syslog-ng installation. Debian users get a customized file that replicates a syslogd setup. Let's put together our examples here in a single file for the server, set up a remote client, and run some tests to see how it works:

#sample syslog-ng.conf for a central logging server
options {

sync (0);
log_fifo_size (2048);
create_dirs (yes);
group (logs);
dir_group (logs);
perm (0640);
dir_perm (0750);
};

source s_local { internal(); unix-stream("/dev/log"); file("/proc/kmsg" log_prefix("kernel: ")); };
destination d_auth { file("/var/log/auth.log"); };
filter f_auth { facility(auth, authpriv); };

source s_remote { tcp(); };
destination d_clients { file("/var/log/HOSTS/$HOST"); };

log { source(s_remote); destination(d_clients); };
log { source(s_local); filter(f_auth); destination(d_auth); };

Whenever you make changes to syslog-ng.conf you must restart it:

# /etc/init.d/syslog-ng restart

Testing Everything
Now you can run some simple tests on both the server and the client. Since the only local server messages that are going to be logged are authorization messages, you can test these by opening a new login session on the server, or running su or sudo. Then check /var/log/auth.log. Test the client by doing anything, then see if a new directory was created for the remote client in /var/log/HOSTS.

Another way is to use the useful and excellent logger command:

# logger "this is a test"
# logger -p auth.debug "this is a test"

This will create a line like this in your logfiles:

Apr 1 16:08:42 localhost.localdomain logger: this is a test

Now that we have a grasp of syslog-ng basics, come back next week to learn how to fine-tune and organize syslog-ng just the way you like, for both local and remote logging, and how to securely encrypt all syslog-ng network transmissions.
