Archive

Archive for the ‘bash’ Category

How to compile mydumper 0.5.2 on Debian 6.0.8 and MySQL Percona Server 5.5.34-rel32.0-591.squeeze

December 8, 2013

How to start:
mydumper home page: https://launchpad.net/mydumper
How to build on Debian/Ubuntu: https://answers.launchpad.net/mydumper/+faq/349 , but that FAQ misses some dependencies like cmake and libpcre
The latest version is 0.5.2, released on 2012-11-19: https://launchpad.net/mydumper/0.5/0.5.2/+download/mydumper-0.5.2.tar.gz

MySQL installed version : percona-server-server-5.5 : 5.5.34-rel32.0-591.squeeze

1. Get the source:

test ! -d ~/installs/ && mkdir -p ~/installs/
cd ~/installs/
wget https://launchpad.net/mydumper/0.5/0.5.2/+download/mydumper-0.5.2.tar.gz
tar xvf mydumper-0.5.2.tar.gz

2. Install the dev packages:

sudo apt-get install libglib2.0-dev zlib1g-dev

3. Install cmake

apt-get install cmake

4. Install the Percona MySQL client dev files:

apt-get install libmysqlclient-dev

5. Build it:

cd ~/installs/mydumper-0.5.2
cmake . -DCMAKE_INSTALL_PREFIX=~/bin/mydumper

6. In case you get a "PCRE not found" error, install the PCRE libs:

apt-get install libpcre3-dev

If cmake finishes without errors, run make:

make

In case you get this error:

 [ 20%] Building C object CMakeFiles/mydumper.dir/binlog.c.o
 [ 40%] Building C object CMakeFiles/mydumper.dir/server_detect.c.o
 [ 60%] Building C object CMakeFiles/mydumper.dir/g_unix_signal.c.o
 make[2]: *** No rule to make target `/usr/lib/libmysqlclient_r.so', needed by `mydumper'.  Stop.
 make[1]: *** [CMakeFiles/mydumper.dir/all] Error 2
 make: *** [all] Error 2

You need to fix /usr/lib/libmysqlclient_r.so; in my case it pointed to a non-existing libmysqlclient_r.so.18:

ls -lrth /usr/lib/libmysqlclient_r.so
lrwxrwxrwx 1 root root 22 Nov 18 07:24 /usr/lib/libmysqlclient_r.so -> libmysqlclient_r.so.18
cd /usr/lib/
rm libmysqlclient_r.so &&  ln -s libmysqlclient.so.18 libmysqlclient_r.so
root@www:[Sun Dec 08 22:01:19][/usr/lib]$ ls -lrt  | grep  libmysqlclient.so.18
-rw-r--r--  1 root root 3162144 Oct 25 08:35 libmysqlclient.so.18.0.0
-rw-r--r--  1 root root 3551104 Oct 25 09:04 libmysqlclient.so.18.1.0
lrwxrwxrwx  1 root root      24 Nov 18 07:20 libmysqlclient_r.so.18.0.0 -> libmysqlclient.so.18.0.0
lrwxrwxrwx  1 root root      24 Nov 18 07:24 libmysqlclient_r.so.18.1.0 -> libmysqlclient.so.18.1.0
lrwxrwxrwx  1 root root      20 Nov 18 07:24 libmysqlclient.so -> libmysqlclient.so.18
lrwxrwxrwx  1 root root      26 Nov 18 07:24 libmysqlclient.so.18 -> libmysqlclient_r.so.18.1.0
lrwxrwxrwx  1 root root      20 Dec  7 20:41 libmysqlclient_r.so -> libmysqlclient.so.18

Then run make again.
In case you get this error:

sql_common.h:26:18: fatal error: hash.h: No such file or directory

This is due to MySQL bug #70672.
To fix it, get the Percona source:

$ cd ~/installs/
 wget  http://www.percona.com/redir/downloads/Percona-Server-5.5/LATEST/source/Percona-Server-5.5.34-rel32.0.tar.gz
 tar xvf Percona-Server-5.5.34-rel32.0.tar.gz
 sudo cp ./Percona-Server-5.5.34-rel32.0/include/hash.h /usr/include/
 then try to build mydumper again:
 cd ~/installs/mydumper-0.5.2
 make
 Scanning dependencies of target mydumper
 [ 20%] Building C object CMakeFiles/mydumper.dir/mydumper.c.o
 [ 40%] Building C object CMakeFiles/mydumper.dir/binlog.c.o
 [ 60%] Building C object CMakeFiles/mydumper.dir/server_detect.c.o
 [ 80%] Building C object CMakeFiles/mydumper.dir/g_unix_signal.c.o
 Linking C executable mydumper
 [ 80%] Built target mydumper
 Scanning dependencies of target myloader
 [100%] Building C object CMakeFiles/myloader.dir/myloader.c.o
 Linking C executable myloader
 [100%] Built target myloader
 make install
 [ 80%] Built target mydumper
 [100%] Built target myloader
 Install the project...
 -- Install configuration: ""
 -- Installing: /home/seik/bin/mydumper/bin/mydumper
 -- Installing: /home/seik/bin/mydumper/bin/myloader
 export PATH=$PATH:/home/seik/bin/mydumper/bin/
 myloader --version
 myloader 0.5.2, built against MySQL 5.6.14

Seems to work.
So, how to back up the mysql database:

mkdir /home/.mydumper
mydumper -o /home/.mydumper -r 100000 -c -e -m -L mysql-backup.log -u root -p mypass -h localhost -t 2 -v 3 -B mysql 
mysqldump -u root -p -d -R --skip-triggers mysql  > /home/.mydumper/mysql.schema
mysqldump -u root -p -d -t  mysql  > mysql.triggers
seik@www:[Sun Dec 08 22:28:35][/home/.mydumper]$ ls -lrth
total 260K
-rw-r--r-- 1 seik seik  250 Dec  8 04:02 mysql.db.sql.gz
-rw-r--r-- 1 seik seik   78 Dec  8 04:02 mysql.columns_priv.sql.gz
-rw-r--r-- 1 seik seik   78 Dec  8 04:02 mysql.event.sql.gz
-rw-r--r-- 1 seik seik   78 Dec  8 04:02 mysql.general_log.sql.gz
-rw-r--r-- 1 seik seik  166 Dec  8 04:02 mysql.func.sql.gz
-rw-r--r-- 1 seik seik  614 Dec  8 04:02 mysql.help_category.sql.gz
-rw-r--r-- 1 seik seik 3.4K Dec  8 04:02 mysql.help_relation.sql.gz
-rw-r--r-- 1 seik seik   78 Dec  8 04:02 mysql.plugin.sql.gz
-rw-r--r-- 1 seik seik   78 Dec  8 04:02 mysql.ndb_binlog_index.sql.gz
-rw-r--r-- 1 seik seik   78 Dec  8 04:02 mysql.host.sql.gz
-rw-r--r-- 1 seik seik 3.3K Dec  8 04:02 mysql.help_keyword.sql.gz
-rw-r--r-- 1 seik seik   78 Dec  8 04:02 mysql.procs_priv.sql.gz
-rw-r--r-- 1 seik seik   78 Dec  8 04:02 mysql.proc.sql.gz
-rw-r--r-- 1 seik seik   78 Dec  8 04:02 mysql.servers.sql.gz
-rw-r--r-- 1 seik seik  173 Dec  8 04:02 mysql.proxies_priv.sql.gz
-rw-r--r-- 1 seik seik   78 Dec  8 04:02 mysql.time_zone_transition_type.sql.gz
-rw-r--r-- 1 seik seik   78 Dec  8 04:02 mysql.time_zone_transition.sql.gz
-rw-r--r-- 1 seik seik   78 Dec  8 04:02 mysql.time_zone_name.sql.gz
-rw-r--r-- 1 seik seik   78 Dec  8 04:02 mysql.time_zone_leap_second.sql.gz
-rw-r--r-- 1 seik seik   78 Dec  8 04:02 mysql.time_zone.sql.gz
-rw-r--r-- 1 seik seik   78 Dec  8 04:02 mysql.tables_priv.sql.gz
-rw-r--r-- 1 seik seik   78 Dec  8 04:02 mysql.slow_log.sql.gz
-rw-r--r-- 1 seik seik  539 Dec  8 04:02 mysql.user.sql.gz
-rw-r--r-- 1 seik seik 128K Dec  8 04:02 mysql.help_topic.sql.gz
-rw-r--r-- 1 seik seik   75 Dec  8 04:02 metadata
-rw-r--r-- 1 seik seik  25K Dec  8 04:08 mysql.schema
-rw-r--r-- 1 seik seik 1.3K Dec  8 04:08 mysql.triggers
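
For the record, restoring such a dump is done with the bundled myloader. A minimal sketch, assuming the dump directory and credentials from above (the -o/--overwrite-tables option drops and recreates existing tables, so handle with care):

# restore the mydumper dump of the mysql database with 2 threads
myloader -d /home/.mydumper -B mysql -o -t 2 -v 3 -u root -p mypass -h localhost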

Slackware4Life ! 🙂

How to store MySQL innobackupex backups at Google Cloud Storage

November 26, 2013

In general, I chose Google Cloud Storage to store the websites' MySQL backups because of its price and its upload/download speed.

I used the native Google tool gsutil, innobackupex and some bash.

In short, the /etc and local MySQL backup script:

#!/bin/sh
# Barcelona Tue Nov 22 17 16:30:36 CEST 2013

days_to_keep=3
NFS=/home/mysql.backups/
exportDate=`date +%Y-%m-%d.%H.%M.%S`
export_DIR=${NFS}/${HOSTNAME}.${exportDate}
test ! -d "${export_DIR}" && echo "$(date) : creating ${export_DIR}" && mkdir -p "${export_DIR}"
export_MySQL_DIR=${export_DIR}/mysql.bckp
export_ETC_DIR=${export_DIR}/etc.bckp
# backup the /etc directory
rsync -avh /etc ${export_ETC_DIR}
echo "=========================================================================================================" >> ${export_DIR}/README.restore.with.innobackupex
echo "HOW to restore this FULL mysql backup" >> ${export_DIR}/README.restore.with.innobackupex
echo "=========================================================================================================" >> ${export_DIR}/README.restore.with.innobackupex
echo "service mysql stop" >> ${export_DIR}/README.restore.with.innobackupex
echo "ps aux | grep mysql" >> ${export_DIR}/README.restore.with.innobackupex
echo "rsync -avh /var/lib/mysql /var/lib/mysql.BAD" >> ${export_DIR}/README.restore.with.innobackupex
echo "rm -rf /var/lib/mysql" >> ${export_DIR}/README.restore.with.innobackupex
echo "mkdir -p /var/lib/mysql && chown -R mysql:mysql /var/lib/mysql" >> ${export_DIR}/README.restore.with.innobackupex
echo "innobackupex --copy-back ${export_MySQL_DIR}" >> ${export_DIR}/README.restore.with.innobackupex
echo "=========================================================================================================" >> ${export_DIR}/README.restore.with.innobackupex
echo "more info at http://www.percona.com/doc/percona-xtrabackup/2.1/innobackupex/restoring_a_backup_ibk.html:" >> ${export_DIR}/README.restore.with.innobackupex
#cat /root/bin/README.restore.with.innobackupex >> ${export_DIR}/README.restore.with.innobackupex
innobackupex --ibbackup=xtrabackup --no-timestamp ${export_MySQL_DIR}
test $? -gt 0 && echo "$(date) : xtrabackup failed at ${export_MySQL_DIR}" && exit 0
innobackupex --apply-log ${export_MySQL_DIR}
find ${NFS}/ -daystart -maxdepth 1 -ctime +${days_to_keep} -type d -delete
find ${NFS}/ -daystart -maxdepth 1 -ctime +${days_to_keep} -type f -delete
# compress with bzip2, to match the .tar.bz2 name
nice tar cjvf ${export_DIR}.tar.bz2 ${export_DIR}
test $? -eq 0 && chown seik:seik ${export_DIR}.tar.bz2 && chmod 0700 ${export_DIR}.tar.bz2 && rm -rf ${export_DIR}
chown -R seik:seik "${NFS}"

The script that uploads the backups to Google Cloud Storage; it keeps only the latest 5 backups there:

#!/bin/bash
backupDir="/home/mysql.backups"
export PATH=${PATH}:$HOME/gsutil
gsUrl="gs://gsutil-test-test_default_cors-bucket-xxxxxx"
remoteBckpDir="tobedone.es"
for backup in `ls ${backupDir}`
do
 echo "$(date) : checking if ${backup} is stored"
 gsutil ls ${gsUrl}/${remoteBckpDir}/${backup} > /dev/null 2>&1
 if [ $? -gt 0 ]
 then
 echo "$(date) : ${backupDir}/${backup} is not stored, initiating upload"
 gsutil cp -R ${backupDir}/${backup} ${gsUrl}/${remoteBckpDir}/
 test $? -eq 0 && echo "$(date) : ${backup} is stored, deleting the local one" && rm ${backupDir}/${backup}
 else
 echo "$(date) : ${backup} is stored"
 test -f ${backupDir}/${backup} && echo "$(date) : ${backup} is stored, deleting the local one" && rm ${backupDir}/${backup}
 fi
done
# do some cleanup at Google storage: keep only the 5 most recent backups
gsutil ls -lrh ${gsUrl}/${remoteBckpDir}/ | sed '2,$!d;$d' | sort -r -k 3.12 | awk '{print $4}' | sed '6,$!d' | xargs -icrap gsutil rm crap
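
And when a backup has to be brought back, a minimal sketch of fetching it from the bucket and unpacking it locally (the bucket, remote directory and local path are the ones used above; the archive name is just an example of the ${HOSTNAME}.${exportDate}.tar.bz2 naming):

gsutil cp gs://gsutil-test-test_default_cors-bucket-xxxxxx/tobedone.es/www.2013-11-26.03.00.01.tar.bz2 /home/mysql.backups/
cd /home/mysql.backups/ && tar xvf www.2013-11-26.03.00.01.tar.bz2
# then follow the README.restore.with.innobackupex shipped inside the extracted directory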

Daily backup OpenStack single MySQL with Percona innobackupex including the /etc directory

October 17, 2013

This is a short script for a daily backup of the OpenStack MySQL and the /etc directory of the control node.

[root@dev-epg-rhos-01 BACKUP]# cat /root/bin/epg.innobackupex.openstack.sh
#!/bin/sh
# done for epgmad4@tid.es
# Barcelona Thu Oct 17 16:30:36 CEST 2013

days_to_keep=7
NFS=/BACKUP
exportDate=`date +%Y-%m-%d.%H.%M.%S`
export_DIR=${NFS}/${HOSTNAME}.${exportDate}
test ! -d "${export_DIR}" && echo "$(date) : creating ${export_DIR}" && mkdir -p "${export_DIR}"
export_MySQL_DIR=${export_DIR}/mysql.bckp
export_ETC_DIR=${export_DIR}/etc.bckp
rsync -avh /etc ${export_ETC_DIR}
echo "=========================================================================================================" >> ${export_DIR}/README.restore.with.innobackupex
echo "HOW to restore this FULL mysql backup" >> ${export_DIR}/README.restore.with.innobackupex
echo "=========================================================================================================" >> ${export_DIR}/README.restore.with.innobackupex
echo "DO IT IN A SCREEN, as it involves some file movements:" >> ${export_DIR}/README.restore.with.innobackupex
echo "screen -S mysqlRestore" >> ${export_DIR}/README.restore.with.innobackupex
echo "service mysql stop" >> ${export_DIR}/README.restore.with.innobackupex
echo "ps aux | grep mysql" >> ${export_DIR}/README.restore.with.innobackupex
echo "rsync -avh /var/lib/mysql /var/lib/mysql.BAD" >> ${export_DIR}/README.restore.with.innobackupex
echo "rm -rf /var/lib/mysql" >>  ${export_DIR}/README.restore.with.innobackupex
echo "mkdir -p /var/lib/mysql && chown -R mysql:mysql /var/lib/mysql" >>  ${export_DIR}/README.restore.with.innobackupex
echo "innobackupex --copy-back ${export_MySQL_DIR}" >>  ${export_DIR}/README.restore.with.innobackupex
echo "=========================================================================================================" >> ${export_DIR}/README.restore.with.innobackupex
echo "more info at http://www.percona.com/doc/percona-xtrabackup/2.1/innobackupex/restoring_a_backup_ibk.html:" >>  ${export_DIR}/README.restore.with.innobackupex
cat /root/bin/README.restore.with.innobackupex >> ${export_DIR}/README.restore.with.innobackupex
innobackupex --ibbackup=xtrabackup --no-timestamp ${export_MySQL_DIR}
test $? -gt 0 && echo "$(date) : xtrabackup failed at ${export_MySQL_DIR}" && exit 0
innobackupex --apply-log ${export_MySQL_DIR}
find ${NFS}/ -daystart -maxdepth 1 -ctime +${days_to_keep} -type d -name "*${HOSTNAME}*" -exec /bin/rm -rf {} \;

#Slackware4Life!
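
To actually run it daily, a crontab sketch (the 04:00 run time and the log file are my choice here, not part of the original setup):

# root's crontab (crontab -e) : run the backup every day at 04:00
0 4 * * * /root/bin/epg.innobackupex.openstack.sh >> /var/log/epg.innobackupex.openstack.log 2>&1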

The README.restore.with.innobackupex content:

cat /root/bin/README.restore.with.innobackupex
Restoring a Full Backup with innobackupex
For convenience, innobackupex has a --copy-back option,
which performs the restoration of a backup to the server’s datadir

$ innobackupex --copy-back /path/to/BACKUP-DIR
It will copy all the data-related files back to the server’s datadir,
determined by the server’s my.cnf configuration file.
You should check the last line of the output for a success message:
==================================================
=> innobackupex: Finished copying back files. <=
=> 111225 01:08:13 innobackupex: completed OK! <=
==================================================

Note: The datadir must be empty; the Percona XtraBackup innobackupex --copy-back option will not copy over existing files.
Also it’s important to note that MySQL server needs to be shut down before restore is performed.
You can’t restore to a datadir of a running mysqld instance (except when importing a partial backup).
As files’ attributes will be preserved, in most cases you will need to change the files’ ownership
to mysql before starting the database server, as they will be owned by the user who created the backup:

$ chown -R mysql:mysql /var/lib/mysql
Also note that all of these operations will be done as the user calling innobackupex,
so you will need write permissions on the server's datadir.

Fedora 17 Gnome3 restart X/gdm from command line on locked or frozen screen

March 8, 2013

I locked my Lenovo running Fedora 17 and was not able to bring back the login screen, no idea why.

Anyway, I hate rebooting Linux-powered devices, so the solution was the one from the old Slackware days: get a shell!

1. Press CTRL+ALT+F2 to get to a console and log in as root.
2. Check the actual gdm/display manager service name:

root@ivan.hi.inet:[Fri Mar 08 15:06:55][~]$ systemctl list-units | grep -i Display
prefdm.service            loaded active running       Display Manager

Then restart it:

systemctl restart prefdm.service

and you will get back to the default X login screen 🙂

Well, make sure you have w3m installed; I used it to search for the solution from the console.
Resource links: [SOLVED] Restart GDM/X from command line

Cheers,

and Slackware4Life!

Categories: bash, Fedora 17 64 bit

One-liner to clear the store of particular old MO/DLR messages of Kannel bearerbox version `svn-r5011M'.

February 10, 2013

We assume the Kannel store is spooled at /var/log/kannel/kannel.spool/
How to clean the old unsent/unrouted MO messages from the 1st to the 9th of Feb 2013:


root@darkstar:[Sun Feb 10 17:28:56]:[~/bin]$ curl -s "http://127.0.0.1:13000/store-status.txt?username=xxxxx&password=xxxx" | awk '$2~/\[MO\]/{print $0};$3~/\[2013-02-0/{ gsub(/[\[\]]/,"",$1);print $1}'  | xargs --no-run-if-empty -icrap find /var/log/kannel/kannel.spool/ -type f -name crap -exec rm -f {} \;
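
Before running the destructive one-liner above you may want a dry run; the same pipeline with -print instead of rm, so it only lists the spool files that would be removed:

curl -s "http://127.0.0.1:13000/store-status.txt?username=xxxxx&password=xxxx" | awk '$2~/\[MO\]/{print $0};$3~/\[2013-02-0/{ gsub(/[\[\]]/,"",$1);print $1}'  | xargs --no-run-if-empty -icrap find /var/log/kannel/kannel.spool/ -type f -name crap -print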

service kannel restart

Categories: bash, CentOS, Kannel, Linux, Slackware

MySQL Cluster mysql-5.5.22 ndb-7.2.6 on Linux RHEL 6.3 (Santiago) : restore backup script

February 4, 2013

Latest update: the restore part is done, with the exception of the full MySQL cluster restore; I need to do more tests there.
Some code cleanup is to be done as well, so if someone is interested, check the git repository from now on.

Update: migrated to Bitbucket at https://bitbucket.org/seikath/epg-mysql-cluster , still under development.

After digging for more detailed info on the proper execution of the MySQL cluster restore procedure, I came up with this script:

Note: the real restore is deactivated here and there is one "s" added to the ndb_* commands to avoid any fuckups in the PROD ENV where I wrote it.

The script has been tested at the DEV ENV with some small changes.

Second note: due to security restrictions I decided to execute the restore from one of the data nodes, in order to save some extra scripting.
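
For context, the backups this script restores are native NDB backups; a minimal sketch of how such a backup is usually taken from the management client (the management node IPs are the ones from the config file shown at the end):

# start a native NDB backup via the management client
ndb_mgm --ndb-mgmd-host=10.95.109.216,10.95.109.217 -e "START BACKUP"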

#!/bin/sh
# epgbcn4 aka seikath@gmail.com
# is410@epg-mysql-memo1:~/bin/epg.mysql.cluster.restore.sh
# moved to bitbucket : 2013-02-06.15.26.48
# 2013-02-17.03.14.12
# 2013-02-17.14.07.09 - add backup data maintenance feature 
# So far on RHEL .. porting to other distros after it's done for RHEL

SCRIPT_NAME=${0%.*}
LOG_FILE="$(basename ${SCRIPT_NAME}).$(date +%Y-%m-%d.%H.%M.%S).log"
CONF_FILE=${SCRIPT_NAME}.conf
TMP_WORL_FILE="/tmp/${HOSTNAME}.$(basename ${SCRIPT_NAME}).tmp"

# logit has to be defined before it is first used below
function logit () {
    echo "$(date)::[${HOSTNAME}] : ${1}"
    echo "$(date)::[${HOSTNAME}] : ${1}" >> "${LOG_FILE}"
}

# Loading configuration
if [ -f "${CONF_FILE}" ]
then
        source "${CONF_FILE}"
else
        logit "Missing config file ${CONF_FILE} !  Exiting now."
        exit 0
fi
# activating debug
test $DEBUG -eq 1 && set -x

# initialize the tmp file
echo "" > "${TMP_WORL_FILE}" && logit "${TMP_WORL_FILE} initialized!"

# getting the user ID, check sudo, ndbd restart command 
check_is_root=$(id | sed '/^uid=0/!d' | wc -l)
user_name=$(id -nu)

local_command_ndbd=$(chkconfig --list| grep ndbd | awk '{print $1}')
command_ndbd="service ${local_command_ndbd} restart-initial"
command_restar_ndbd="sudo ${command_ndbd}"
sudo_status=$(sudo -l | tr '\n|\r' ' ' | sed 's/^.*User /User /;s/  */ /g' | grep -i "${user_name}")
no_passwd_check=0
test `echo ${sudo_status} | grep "(ALL) NOPASSWD: ALL" | wc -l ` -gt 0  && no_passwd_check=1
test $check_is_root -eq 1 && command_restar_ndbd="${command_ndbd}"




if [ ${no_passwd_check} -eq 1 ]
then 
        add_sudo="sudo ";
else 
        add_sudo="";
fi 

# check initial credential 
logit "UID check : ${user_name}"
logit "Sudo check : ${sudo_status}"
test ${no_passwd_check} -eq 1 && logit "No Passwd sudo check : Confirmed!"
test ${no_passwd_check} -eq 0 && logit "No Passwd sudo check : NOTE -> Missing passwordless sudo!"
#logit "Got ndbd sercice restart command to be run by user ${user_name} : ${command_restar_ndbd}"

# get the available active IPs:
local_ip_array=$(${add_sudo}ifconfig  | grep "inet addr:" | grep -v grep | awk '{print $2}' | sed 's/^addr://')

# get the ndbd data 
data=$(${add_sudo}ndb_config -c ${ndb_mgmd[1]},${ndb_mgmd[2]} --type=ndbd --query=id,host,datadir -f ' ' -r '\n') 

# get the recent data node ID, its IP and the data directory used
# check the ndbd start-stop command name of all ndbd data nodes
echo "${data}" | \
while read nodeID  nodeIP backupDir
do      
        logit "Getting the ndbd start script name from ${user_name}@${nodeIP}"
        command_ndbd[${nodeID}]=$(echo "${add_sudo}/sbin/chkconfig --list" | ${ssh_command} ${user_name}@${nodeIP}  | grep ndb | awk '{print $1}')
        localHit=0;
        for IP in ${local_ip_array}
        do
                test "${nodeIP}" == "${IP}" && localHit=1 && break;
        done  
        echo -e "${nodeIP}\t${nodeID}\t${backupDir}${LocalBackupDirName}\t${command_ndbd[${nodeID}]}\t${localHit}" >> "${TMP_WORL_FILE}"
done

# load the the recent data node ID, its IP and the data directory used
if [ -f "${TMP_WORL_FILE}" ]
then 
        ndbd_cntr=0;
        while read tmp_IP tmp_nodeID tmp_backupDir tmp_command_ndbd tmp_localHit
        do
                test -z ${tmp_localHit} && continue;
                if [ ${tmp_localHit} -eq 1 ] 
                then
                        IP="${tmp_IP}";
                        nodeID="${tmp_nodeID}";
                        backupDir="${tmp_backupDir}"
                        
                fi 
                command_ndbd[${nodeID}]="${tmp_command_ndbd}";
                ndbd_data_node_id[$ndbd_cntr]="${tmp_nodeID}";
                ndbd_data_IP[$ndbd_cntr]="${tmp_IP}";
                ndbd_data_bckp_dir[$ndbd_cntr]="${tmp_backupDir}";
                ndbd_data_cmd[$ndbd_cntr]="${tmp_command_ndbd}";
                ndbd_data_local[$ndbd_cntr]=${tmp_localHit};
                ((++ndbd_cntr))
        done < "${TMP_WORL_FILE}"
else
        logit "Missing data collection at ${TMP_WORL_FILE}. Exiting now." && exit 0;
fi

logit "got Local machine IP ${IP}";
logit "got Local machine MySQL cluster nodeID : ${nodeID}";
logit "got MySQL cluster local backup Dir : ${backupDir}";
logit "got MySQL cluster RHEL local service command : ${local_command_ndbd}";

# choose other backup available
while [ 1  ]
do 
        read  -r -p "$(date)::[${HOSTNAME}] : Do you want to choose ANOTHER backup folder : Yes/No [y/n] : "  choice
        if [ "$choice" != "" ]
        then
                case $choice in
                "Yes" | "yes" | "y" | "Si" | "si" | "Y")
                while [ 1  ]
                do 
                        read  -r -p "$(date)::[${HOSTNAME}] : Please provide the full name of the PARENT backup folder or hit CTRL+C to terminate...: "  chosenDIR
                        if [ -d "${chosenDIR}" ]
                        then
                                echo "";
                                backupDir="${chosenDIR}"
                                break 2;
                        else 
                                logit "We can not find the PARENT backup folder ${chosenDIR}"
                                echo ""
                        fi
                done
                ;; 
                "No" | "n" | "N" )
                logit "Proceeding with the configured nightly backup.."
                break;
                ;;
                *)
                logit "Empty input, please provide the full name of the backup folder or hit CTRL+C to terminate:"
                ;;
                esac
        fi
done

# check read permissions at backupDir
add_sudo="";
logit "Checking the read permissions of ${backupDir}.."
if [ ! -r "${backupDir}" ]
then 
        logit "User ${user_name} can not read the backup directory of ${backupDir}!";
        logit "Switching to sudo .."
        if [ ${no_passwd_check} -eq 0 ]
        then 
                logit "User ${user_name} can not read the backup directory of ${backupDir} with sudo either!";
                exit 0;
        else 
                add_sudo="sudo "
        fi 
fi 

# check the content of the backup directory provided 
logit "DEBUG : check the content of the backup directory provided [${backupDir}]"
if [ -d "${backupDir}" ]
then
        ${add_sudo}ls -1rt "${backupDir}/" |  while read crap; do logit "Found possible local backup of ndb_mgmd id ${nodeID}::${IP} : [$crap]";done
fi

while [ 1  ]
do 
        read  -r -p "$(date)::[${HOSTNAME}] : Please choose local backup to restore or hit CTRL+C to terminate...:  "  paused
        if [ "$paused" != ""  -a -d "${backupDir}/${paused}" ]
        then
                paused=${paused%%/}
                NDB_BACKUP_NUMBER=${paused/*-/}
                NDB_BACKUP_DIR="${backupDir}/${paused}"
                NDB_BACKUP_LOG="${backupDir}/${paused}/${paused}.${nodeID}.log"
                break;
        else 
                echo ""
        fi
done


# check sudo availability 
add_sudo="";
logit "Checking the read permissions of ${NDB_BACKUP_DIR}.."
if [ ! -r "${NDB_BACKUP_DIR}" ]
then 
        logit "User ${user_name} can not read the backup directory of ${NDB_BACKUP_DIR}!";
        logit "Switching to sudo .."
        if [ ${no_passwd_check} -eq 0 ]
        then 
                logit "User ${user_name} is missing sudo and can not read the backup directory of ${NDB_BACKUP_DIR}!";
                exit 0;
        else 
                add_sudo="sudo ";
        fi 
fi 

# check if there is backup log file in the backup directory 
logit "Checking the read permissions of ${NDB_BACKUP_LOG}.."
${add_sudo}ls ${NDB_BACKUP_LOG}  >> /dev/null 2>&1
test $? -gt 1 && logit "Error : ${NDB_BACKUP_LOG} is missing at ${NDB_BACKUP_DIR} ! Exiting now." && exit 0;

# checking the backup consistency:
if [ -d "${NDB_BACKUP_DIR}" ]
then
        logit "We are about to proceed with the restore of the backup at ${NDB_BACKUP_DIR}:  $(${add_sudo}ls -lrth ${NDB_BACKUP_DIR})"
        logit "Checking the backup consistency:"
        NDB_BACKUP_STATUS=$(${add_sudo}ndb_print_backup_file "${NDB_BACKUP_LOG}")
        test `echo ${NDB_BACKUP_STATUS}  | grep -i "NDBBCKUP" | wc -l ` -eq 0 && logit "${NDB_BACKUP_LOG} is NOT a consistent NDB backup file!" && exit 0
        # echo "${NDB_BACKUP_STATUS}"
        logit "Confirmed : ${NDB_BACKUP_DIR} contains a consistent backup"
else 
        logit "ERROR : Missing NDB BACKUP directory ${NDB_BACKUP_DIR}!"
fi

#  choose the restore type: full restore with drop database or table restore
logit "Starting the restore type questionnaire: "

restoreStringInclude="";
while [ 1  ]
do 
        read  -r -p "$(date)::[${HOSTNAME}] : Please choose the restore type : FULL MySQL cluster [F], DATABASE [D] or TABLE [T] to restore OR hit CTRL+C to terminate : "  restore
        if [ "$restore" != "" ]
        then
                case $restore in
                "F" | "f" | "FULL" | "Full" )
                restoreStringInclude="-m"; # restore MySQL cluster table metadata 
                logit "Proceeding with the FULL MySQL BACKUP restore.";
                break;
                ;; 
                "D" | "d" | "Database" | "DATABASE" )
                logit "Make sure the database exists, otherwise the restore will fail and you would need a full MySQL initialization restore"
                # add here check of the MySQL cluster data nodes status 
                logit "Fetching the databases from the MySQL cluster ... "
                # Fetch the databases from the MySQL Cluster : 
                data_ndb_databases_online=$(${add_sudo}ndb_show_tables -c ${ndb_mgmd[1]},${ndb_mgmd[2]} -t 2 | awk '$1 ~ /^[[:digit:]]/ && $2 == "UserTable" && $3 == "Online"  {print $5}' | sort | uniq)
                cntr=0;
                for DbName in ${data_ndb_databases_online}
                do 
                        ((++cntr));
                        dbArrayName[${cntr}]="${DbName}";
                        comma=" => ";
                        test $cntr -gt 9 &&  comma=" : "
                        logit "Found database${comma}[${DbName}]";
                        lastdbArrayName="${DbName}";
                done

                # Get the users Database choice
                while [ 1  ]
                do 
                        logit "You may provide a comma separated list of databases to restore.";
                        test ${#dbArrayName[@]} -gt 1 && logit "Example: ${dbArrayName[1]},${lastdbArrayName}";
                        test ${#dbArrayName[@]} -eq 1 && logit "Example: ${dbArrayName[1]}";
                        read  -r -p "$(date)::[${HOSTNAME}] : Please provide the DATABASE NAMES OR hit CTRL+C to terminate : "  userDbNames;
                        if [ "${userDbNames}" != "" ]
                        then
                                # Read the user choices
                                IFS=', ' read -a ArrayUserDbNames <<< "${userDbNames}"
                                # checking the user data consistency
                                logit "Checking the databases.."
                                DbNameOnly_restrore_string="";
                                for idx in "${!ArrayUserDbNames[@]}"
                                do
                                        crap[$idx]=1;

                                        for DbNameOnly  in ${data_ndb_databases_online}
                                        do
                                                if [ "${ArrayUserDbNames[idx]}" == "${DbNameOnly}" ]
                                                then 
                                                        commat="";
                                                        test "${DbNameOnly_restrore_string}" != "" && commat=",";
                                                        crap[$idx]=0;
                                                        logit "[${ArrayUserDbNames[idx]}] : Confirmed";
                                                        break;
                                                fi
                                        done
                                        DbNameOnly_restrore_string="${DbNameOnly_restrore_string}${commat}${ArrayUserDbNames[idx]}";
                                        test ${crap[idx]} -eq 1 \
                                        && logit "Note : the database ${ArrayUserDbNames[idx]} is missing in the current MySQL Cluster!" \
                                        && logit "We recommend a restore with DDL/metadata" \
                                        && logit "After a successful restore of a MISSING database you HAVE TO CREATE IT by \"mysql> create database ${ArrayUserDbNames[idx]};\"" \
                                        && logit "Then all the restored tables and data will be accessible.";
                                                                                
                                done
                                # check if the DDL should be restored as well :
                                while [ 1  ]
                                do
                                        read  -r -p "$(date)::[${HOSTNAME}] : Do you want the table metadata to be restored as well? Y/N : "  restoreDDL;
                                        if [ "${restoreDDL}" != "" ]
                                        then
                                                case ${restoreDDL} in
                                                "Y" | "y" | "yes" | "Yes" | "YES" )
                                                logit "Including the DDL/meta table data restore";
                                                restoreStringInclude="-m --include-databases=${DbNameOnly_restrore_string}";
                                                break;
                                                ;;
                                                "N" | "n" | "No" | "NO" | "Non" )
                                                logit "Skipping the DDL/meta table data restore";
                                                restoreStringInclude="--include-databases=${DbNameOnly_restrore_string}";
                                                break;
                                                ;;
                                                *)
                                                logit "Please choose [Y]es or [N]O!"
                                                ;;
                                                esac
                                        fi
                                done 
                                #logit "Proceeding with the BACKUP of the database(s) ${DbNameOnly_restrore_string}"
                                #logit "DEBUG : restoreStringInclude : ${restoreStringInclude}";
                                # restoreStringInclude="--include-databases=${DbNameOnly_restrore_string}";
                                break 2;
                        else 
                                logit "Empty database(s) name to be restored!"
                        fi
                done 

                logit "Proceeding with the FULL DATABASE restore. To be done just like the table restore"
                break;
                ;; 
                "T" | "t" )
                logit "Make sure the database.table exists, otherwise the restore will fail."
                logit "Fetching the databases and its tables from the MySQL cluster ... "
                # get the database.table list from the mysql cluster
                data_ndb_databases_tables_online=$(${add_sudo}ndb_show_tables -c ${ndb_mgmd[1]},${ndb_mgmd[2]} -t 2 | awk  ' ($1 ~ /^[[:digit:]]/ && $7 !~ /^NDB\$BLOB/) {print $5"."$7}' | sort | uniq)
                # print a list of the db.tables available atm 
                cntr=0;
                for DbNameAndTable  in ${data_ndb_databases_tables_online}
                do 
                        ((++cntr));
                        dbArray[${cntr}]="${DbNameAndTable}";
                        comma="  : ";
                        test $cntr -gt 9 &&  comma=" : "
                        logit "[${cntr}]${comma}[${DbNameAndTable}]";
                        lastdbArray="${DbNameAndTable}";
                done
                # Get the users Database and table choice
                DbNameTable_restrore_string="";
                while [ 1  ]
                do 
                        logit "You may provide a comma separated list of tables to restore.";
                        test ${#dbArray[@]} -gt 1 && logit "Example: ${dbArray[1]},${lastdbArray}";
                        test ${#dbArray[@]} -eq 1 && logit "Example: ${dbArray[1]}";
                        read  -r -p "$(date)::[${HOSTNAME}] : Please provide the full name of the table(s) OR hit CTRL+C to terminate : "  tableName;
                        if [ "${tableName}" != "" ]
                        then
                                # Read the user choices
                                IFS=', *' read -a userTables <<< "${tableName}"
                                # checking the user data consistency
                                logit "Checking the tables.."
                                for idx in "${!userTables[@]}"
                                do
                                        crap[$idx]=1;
                                        for DbNameAndTable  in ${data_ndb_databases_tables_online}
                                        do
                                                if [ "${userTables[idx]}" == "${DbNameAndTable}" ]
                                                then
                                                        commat="";
                                                        test "${DbNameTable_restrore_string}" != "" && commat=",";
                                                        crap[$idx]=0;
                                                        logit "[${userTables[idx]}] : Confirmed";
                                                        break;
                                                fi 
                                        done
                                        DbNameTable_restrore_string="${DbNameTable_restrore_string}${commat}${userTables[idx]}";
                                        test ${crap[idx]} -eq 1 && logit "NOTE : Table ${userTables[idx]} is missing in the current MySQL Cluster!";
                                done

                                # check if the DDL should be restored as well :
                                while [ 1  ]
                                do
                                        read  -r -p "$(date)::[${HOSTNAME}] : Do you want the table metadata to be restored as well? Y/N : "  restoreDDL;
                                        if [ "${restoreDDL}" != "" ]
                                        then
                                                case ${restoreDDL} in
                                                "Y" | "y" | "yes" | "Yes" | "YES" )
                                                logit "Including the DDL/meta table data restore";
                                                restoreStringInclude="-m --include-tables=${DbNameTable_restrore_string}";
                                                break;
                                                ;;
                                                "N" | "n" | "No" | "NO" | "Non" )
                                                logit "Skipping the DDL/meta table data restore";
                                                restoreStringInclude="--include-tables=${DbNameTable_restrore_string}";
                                                break;
                                                ;;
                                                *)
                                                logit "Please choose [Y]es or [N]O!"
                                                ;;
                                                esac
                                        fi
                                done 

                                #restoreStringInclude="--include-tables=${DbNameTable_restrore_string}";
                                logit "Proceeding with the RESTORE of the tables ${DbNameTable_restrore_string}"
                                break 2;
                        else 
                                logit "Empty table name to be restored!"
                        fi
                done 
                ;;
                *)
                logit "Please choose the restore type : FULL MySQL cluster [F], DATABASE [D] or TABLE [T] restore OR hit CTRL+C to terminate... [F]/[D]/[T] : "
                ;;
                esac
        fi
done

logit "About to execute the restore procedure with the following options : [${restoreStringInclude}]."
# possible stupid question to add : Do you want to proceed ? Y/N [Y]
# checking the available API nodes :
logit "Checking the available API nodes:"
api_data=$(${add_sudo}ndb_mgm --ndb-mgmd-host=${ndb_mgmd[1]},${ndb_mgmd[2]} -e 'show' | sed  '/^\[mysqld(API)\]/,$!d;/^ *$/d')
echo "${api_data}"
#get the first node : 
echo "${api_data}" | sed  '/^\[mysqld(API)\]/d' | \
while read  API_NODE_ID API_NODE_IP crap
do
        API_NODE_ID=${API_NODE_ID/*=/}
        test `echo "${crap}" | grep "not connected" | wc -l` -gt 0 && logit "Skipping NOT CONNECTED API Node ID [${API_NODE_ID}] ${API_NODE_IP} ${crap}" && continue;
        API_NODE_IP=${API_NODE_IP/@/}
        logit "Proceeding with MySQL Cluster API NODE [${API_NODE_ID}] at [${API_NODE_IP}]"
        API_NODE_ID=${API_NODE_ID/*=/}
        # set the API node in single user more :
        case $restore in
        "F" | "f" | "FULL" | "Full" )
                # loop again the data nodes 
                logit "The full MySQL cluster restore has been deactivated for now. The procedure will be added after extensive testing."
                exit 0;
                ndbd_initial_status=1;
                for idx in $(seq 0 $((${#ndbd_data_node_id[@]} - 1)))
                do
                        ndbd_start_status[$idx]=$(echo "ps aux | grep -v grep | grep -i ndbd | sed '1,1!d'" | ${ssh_command} ${user_name}@${ndbd_data_IP[idx]})
                        if [ "${ndbd_start_status[idx]}" != "" -a "${ndbd_start_status[idx]}" != "${ndbd_start_status[idx]/--initial/}" ]
                        then
                                
                                logit "MySQL Cluster NDB DATA NODE [${ndbd_data_node_id[idx]}] running in initial mode, no restart needed";
                        elif [ "${ndbd_start_status[idx]}" == "" ]
                        then
                                ndbd_initial_status=0;
                                logit "MySQL Cluster NDB DATA NODE [${ndbd_data_node_id[idx]}] is running in start mode, restart in initial mode is needed.";
                                logit "Executing restart initial at NDBD node [${ndbd_data_node_id[idx]}]";
                        else
                                ndbd_initial_status=0;
                                logit "MySQL Cluster NDB DATA NODE [${ndbd_data_node_id[idx]}] is NOT running";
                        fi 
                        logit "DEBUG: idx: [${idx}] : ${ndbd_start_status[idx]}";
                done
                if [ ${ndbd_initial_status} -eq 1  ]
                then
                        logit "Check MySQL Cluster single user mode status";
                        mysql_sluster_status=$(${add_sudo}ndb_mgm --ndb-mgmd-host=${ndb_mgmd[1]},${ndb_mgmd[2]} -e 'show' | sed '/ndbd/,/^ *$/!d;/^ *$/d;/^id/!d;/single user mode/!d' | wc -l)
                        if [ ${mysql_sluster_status} -eq $((${#ndbd_data_node_id[@]}-1)) ]
                        then
                                logit "Setting the MySQL Cluster API NODE [${API_NODE_ID}] at ${API_NODE_IP} in single user mode";
                                mysql_sluster_set_sinlge_user_mode=$(${add_sudo}ndb_mgm --ndb-mgmd-host=${ndb_mgmd[1]},${ndb_mgmd[2]} -e "enter single user mode ${API_NODE_ID}");
                        else
                                logit "No need to set the single user mode as it's already activated";
                                logit "Executing FULL restore with table metadata."
                                cmd_restore="${add_sudo}ndb_restore -c ${API_NODE_IP}  ${restoreStringInclude} -b ${NDB_BACKUP_NUMBER} -n ${nodeID} -r ${NDB_BACKUP_DIR}"
                                logit "${cmd_restore}"
                                mysql_sluster_restore_result=$(${cmd_restore})
                                echo "${mysql_sluster_restore_result}"


                        fi
                else
                        logit ""
                fi
                exit 0 ;
                logit "ssh -q -nqtt -p22 ${user_name}@${ndbd[1]} '${command_restar_ndbd}' restart-initial"
                logit "DEBUG : have to find the restart command at the other node !"
                logit "ssh -q -nqtt -p22 ${user_name}@${ndbd[2]} '${command_restar_ndbd}' restart-initial"
                logit "Checking the status of ndbd at  ${ndbd[1]}"
                logit "${ssh_command} ${user_name}@${ndbd[1]} '${command_restar_ndbd} status'"
                ndbd_status[1]=$(echo "${command_restar_ndbd} status" | ${ssh_command} ${user_name}@${ndbd[1]})
                logit "Checking the status of ndbd at  ${ndbd[2]}"
                logit "ssh -q -nqtt -p22 ${user_name}@${ndbd[2]} '${command_restar_ndbd} status'"
                logit "Setting the API node [${API_NODE_ID}] in single user mode"
                # possible check if the user wants to clean up the mysql cluster DB like executing drop database ... create database
                logit "${add_sudo}ndb_mgms --ndb-mgmd-host=${ndb_mgmd[1]},${ndb_mgmd[2]} -e 'enter single user mode ${API_NODE_ID}'" 
                status=$(${add_sudo}ndb_mgm --ndb-mgmd-host=${ndb_mgmd[1]},${ndb_mgmd[2]} -e 'show' | grep "^id=${nodeID}" | grep "@${IP}")
                logit "Cluster status of ndbd id ${nodeID} : ${status}"
                logit "${add_sudo}ndb_restores  -c ${API_NODE_IP}  ${restoreStringInclude} -b ${NDB_BACKUP_NUMBER} -n ${nodeID} -r ${NDB_BACKUP_DIR}"
                logit "Exiting the single user mode:"
                logit "${add_sudo}ndb_mgms --ndb-mgmd-host=${ndb_mgmd[1]},${ndb_mgmd[2]} -e 'exit single user mode'"
        ;;
        "D" | "d" | "Database" | "DATABASE" )
                logit "Starting the restore process for database(s) ${DbNameOnly_restrore_string}, please wait a bit .. "
                restore_result=$(${add_sudo}ndb_restore  -c ${API_NODE_IP}  ${restoreStringInclude} -b ${NDB_BACKUP_NUMBER} -n ${nodeID} -r "${NDB_BACKUP_DIR}" 2>&1 | tee -a "${LOG_FILE}")
                what_to_see=$(echo "${restore_result}" | sed '/^Processing data in table/d')
                if [ "${what_to_see}" != "${what_to_see/NDBT_ProgramExit: 0 - OK/}" ]
                then 
                        logit "The restore was successful! Detailed log at ${LOG_FILE}."
                        logit "Slackware4File!";
                elif [ "${what_to_see}" != "${what_to_see/Unable to find table:/}" ]
                then 
                        logit "The restore FAILED due to missing/broken tables! Detailed log at ${LOG_FILE}"
                        logit "We recommend restoring the table metadata of the $(echo ${what_to_see} | sed 's/^.*Unable to find table:/Unable to find table:/;s/^Unable to find table: //;s/ .*$//' ) table";
                elif [ "${what_to_see}" != "${what_to_see/Missing column/}" ]
                then
                        logit "The restore FAILED due to missing/broken fields in a table! Detailed log at ${LOG_FILE}";
                        logit "We recommend a full restore with table metadata.";
                elif [ "${what_to_see}" != "${what_to_see/Schema object with given name already exists/}" ]
                then
                        logit "The restore FAILED due to an attempt to create an existing table! Detailed log at ${LOG_FILE}";
                        logit "We recommend the following steps:";
                        logit "1. Restore without the table metadata OR";
                        logit "2. In case the step fails due to missing tables we recommend a FULL restore with dropping the database";
                else
                        logit "The restore FAILED";
                fi
        ;;
        "T" | "t" )
                logit "Starting the restore process for table(s) ${DbNameTable_restrore_string}, please wait a bit .. "
                restore_result=$(${add_sudo}ndb_restore  -c ${API_NODE_IP}  ${restoreStringInclude} -b ${NDB_BACKUP_NUMBER} -n ${nodeID} -r "${NDB_BACKUP_DIR}" 2>&1 | tee -a "${LOG_FILE}")
                what_to_see=$(echo "${restore_result}" | sed '/^Processing data in table/d')
                if [ "${what_to_see}" != "${what_to_see/NDBT_ProgramExit: 0 - OK/}" ]
                then 
                        logit "The restore was successful! Detailed log at ${LOG_FILE}."
                        logit "Slackware4File!";
                elif [ "${what_to_see}" != "${what_to_see/Unable to find table:/}" ]
                then 
                        logit "The restore FAILED due to missing/broken tables! Detailed log at ${LOG_FILE}"
                        logit "We recommend a full restore with table metadata.";
                elif [ "${what_to_see}" != "${what_to_see/Missing column/}" ]
                then
                        logit "The restore FAILED due to missing/broken fields in a table! Detailed log at ${LOG_FILE}";
                        logit "We recommend the following steps:";
                        logit "1. We recommend a table restore with DDL/table metadata restore";
                        logit "2. In case the step fails due to existing tables we recommend a FULL restore with dropping the database";
                elif [ "${what_to_see}" != "${what_to_see/Schema object with given name already exists/}" ]
                then
                        logit "The restore FAILED due to an attempt to create an existing table! Detailed log at ${LOG_FILE}";
                        logit "We recommend the following steps:";
                        logit "1. Restore without the table metadata OR";
                        logit "2. In case the step fails due to missing tables we recommend a FULL restore with dropping the database";
                else
                        logit "The restore FAILED";
                fi
        ;;
        *)
                logit "Nothing to do here"
        ;;
        esac
        status=$(${add_sudo}ndb_mgm --ndb-mgmd-host=${ndb_mgmd[1]},${ndb_mgmd[2]} -e 'show' | grep "^id=${nodeID}" | grep "@${IP}")
        logit "Cluster status of ndbd id ${nodeID} : ${status}"
        break; # we execute on the first active API node
done

And the config file used:

root@darkstar:[Tue Feb 05 00:06:27]:[~/bin]$ cat mysql.cluster.restore.conf 
# epgbcn4 aka seikath@gmail.com
# is410@epg-mysql-memo1:~/bin/epg.mysql.cluster.restore.sh
# 2013-02-17.03.42.08

ndbd[1]=10.95.109.195
ndbd[2]=10.95.109.196

ndb_mgmd[1]=10.95.109.216
ndb_mgmd[2]=10.95.109.217

LocalBackupDirName="backup/BACKUP"

ssh_command="ssh -T -p 22"

DEBUG=1 # active
DEBUG=0 # deactivated 


# add error handling array here for later handling via looping the array
restore_error_match[0]="Unable to find table:"
restore_error_text[0]="The restore FAILED due to missing/broken tables!"
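
Usage wise: the script derives CONF_FILE from its own name (SCRIPT_NAME=${0%.*}), so the config has to sit next to the script and be named after it. A minimal sketch of an interactive run from one of the data nodes:

# the config must live next to the script as epg.mysql.cluster.restore.conf
cd ~/bin && ./epg.mysql.cluster.restore.sh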

Migrating MySQL 5.5.25a jiradb ERROR 2013 (HY000) on huge single db import

January 18, 2013

Well, I incremented max_allowed_packet from 16M to 512M.
Anyway, I got the same error on the next clean import, so I decided to find a workaround.
So, how to get the data separated from the DDL statements:

# get the table names from the INSERT statements; it's better to have them in a file for future use
sed '/^INSERT INTO/!d;s/ VALUES.*$//' jiradb.20130118.sql | sort | uniq > tablas.como.nombres.txt

Then, how to get the data separated per table:

root@jiragg:[Fri Jan 18 15:26:33]:[/usr/local/BACKUP]$ cat make.inserts.sh
#!/bin/sh
# trim function thanks to http://stackoverflow.com/questions/369758/how-to-trim-whitespace-from-bash-variable
# and http://codesnippets.joyent.com/posts/show/1816
trim() {
    local var=$1
    var="${var#"${var%%[![:space:]]*}"}"   # remove leading whitespace characters
    var="${var%"${var##*[![:space:]]}"}"   # remove trailing whitespace characters
    echo -n "$var"
}

while read tabname
do
        tablename=$(trim $(echo $tabname | sed 's/INSERT INTO//;s/[[:punct:]]*//g'))
        echo "${tabname}:=>${tablename}"
        sed "/^INSERT INTO \`${tablename}\` VALUES/!d" /usr/local/BACKUP/jiradb.20130118.sql | gzip > "/usr/local/BACKUP/${tablename}.jiradb.20130118.sql.gz"
done < tablas.como.nombres.txt

How to get the DDLs:

sed '/^INSERT INTO/d' jiradb.20130118.sql > non.insert.jiradb.20130118.sql

How to import the whole jiradb:

# create the empty database
mysql --defaults-file=~/..credentials.jira -e 'create database jiradb;'
#import the DDLs:
mysql --defaults-file=~/..credentials.jira jiradb < /usr/local/BACKUP/non.insert.jiradb.20130118.sql
# make a list of the compressed data files per table:
find /usr/local/BACKUP/ -type f -name "*sql.gz"  > list.import.tables.txt
# execute the simple import script
root@jiragg:[Fri Jan 18 15:35:05]:[/usr/local/BACKUP]$ cat import.tables.sh
#!/bin/sh
while read tablefile
do
	echo -n "importing ${tablefile}: "
	zcat "${tablefile}" |  mysql --defaults-file=~/..credentials.jira jiradb
	echo ""
done < list.import.tables.txt

And that is it.
Anyway, do not forget to increment max_allowed_packet.
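
For reference, a minimal sketch of the max_allowed_packet change with the value I ended up with - persist it in my.cnf under the [mysqld] section as max_allowed_packet = 512M, or set it at runtime for new connections:

mysql --defaults-file=~/..credentials.jira -e 'SET GLOBAL max_allowed_packet = 536870912;'  # 512M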

Slackware4Life !

Categories: bash, CentOS, Linux, MySQL, RHEL