If it's hard to test, it's not there yet.
#!/bin/bash
logfile=$1
if [ ! -f "$logfile" ]; then
    echo "log file not found: $logfile"
    exit 1
fi
timestamp=$(date +%Y%m%d)
newlogfile=$logfile.$timestamp
cp "$logfile" "$newlogfile"      # copy the current log aside
cat /dev/null > "$logfile"       # truncate in place so the writer keeps its open file handle
gzip -f -9 "$newlogfile"         # compress the copy
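As a quick sanity check, the rotation steps above can be exercised against a throwaway log in a temporary directory (the file name `app.log` is invented for this example):

```shell
#!/bin/bash
set -e
tmpdir=$(mktemp -d)             # scratch area for the demo
logfile=$tmpdir/app.log         # hypothetical log file
echo "some log lines" > "$logfile"

timestamp=$(date +%Y%m%d)
newlogfile=$logfile.$timestamp
cp "$logfile" "$newlogfile"     # copy the log aside
cat /dev/null > "$logfile"      # truncate the original in place
gzip -f -9 "$newlogfile"        # compress the copy

ls "$tmpdir"                    # app.log (now empty) plus app.log.YYYYMMDD.gz
```

After the run, the original file is empty but still present under the same name, and the dated `.gz` copy holds the old contents.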
Nice, but a warning if you have very many log files... you can check for and delete the old log files like this:

find "path" -name "logfilename" -type f -mtime +90 | xargs rm -f

for example.
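To see what that cleanup actually matches, here is a small sketch in a temporary directory, backdating one file with GNU `touch -d` (the file names are invented):

```shell
#!/bin/bash
set -e
tmpdir=$(mktemp -d)
touch "$tmpdir/recent.log.gz"                     # modified just now
touch -d "100 days ago" "$tmpdir/ancient.log.gz"  # backdated (GNU touch)

# -mtime +90 matches files last modified more than 90 days ago
find "$tmpdir" -name '*.gz' -type f -mtime +90 | xargs rm -f

ls "$tmpdir"                                      # only recent.log.gz remains
```

Note the `+` in `-mtime +90`: it means "older than 90 days", whereas a bare `-mtime 90` matches only files exactly 90 days old.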
You can do a lot with the logrotate tool; see /etc/logrotate.d. Whatever script you have written would need to run daily/weekly/monthly or yearly, so you would have to set up a cron job for it, which is not as convenient from a usability perspective. In enterprise environments people use the logrotate tool, which ships with SLES. (I have seen some people use SLF4J too, but that is specific to application logging.)
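For comparison, a minimal logrotate drop-in doing roughly what the script does might look like this (the path and retention count are assumptions for the example, not from the post):

```
# /etc/logrotate.d/myapp -- hypothetical application log
/var/log/myapp/*.log {
    daily
    rotate 90        # keep 90 rotated copies
    compress         # gzip old logs
    missingok        # don't error if the log is absent
    notifempty       # skip rotation when the log is empty
    copytruncate     # copy then truncate, like the script does
}
```

The `copytruncate` directive mirrors the script's `cp` + `cat /dev/null` approach, so the writing application never has to reopen its log file.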
You should probably look at the logrotate package present on most Linux distributions. Also, unless the application itself releases the file by closing it, on many operating systems it will continue to write to the old inode until it stops, silently consuming disk space.
Thanks for the tip on logrotate. The script doesn't delete the current log file, it only truncates it, so I don't think it will leave a phantom inode behind.
Some of you may find this variation of the same script useful. I have a lot of different log files in the same directory; this script looks in a directory for .log files and rotates them if they are bigger than 512 KB. Additionally, it deletes compressed log files older than 90 days (note the + in -mtime +90; without it, find matches only files exactly 90 days old):

#!/bin/bash
timestamp=$(date +%Y%m%d)
LOGDIR=/backup/shells/log
find "$LOGDIR" -name '*.log' -type f -size +512k | while read logfile
do
    echo "$logfile"
    newlogfile=$logfile.$timestamp
    cp "$logfile" "$newlogfile"
    cat /dev/null > "$logfile"
    gzip -f -9 "$newlogfile"
done
# Delete compressed logs older than 90 days:
find "$LOGDIR" -name '*.gz' -type f -mtime +90 | xargs rm -f
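Since such a script needs to run on a schedule, it is typically invoked from cron; a sketch of a crontab entry (the script path here is invented) would be:

```
# Run the rotation script every night at 00:30 (path is hypothetical)
30 0 * * * /backup/shells/rotatelogs.sh >/dev/null 2>&1
```

Alternatively, dropping the script into /etc/cron.daily/ achieves the same thing without editing a crontab.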
Works great, regards.