Unloading a database table larger than 2 Gig


[Editor Note: if you do not have gzip/gunzip (why not?), you can use
compress/uncompress instead with the .Z file extension]
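
For instance, the gzip and gunzip lines in the scripts below would become
something like this (same pipe and file names, only the compression command
and the file extension change):

# compress instead of gzip when unloading
compress -c < npipe.$$ > $TABLENAME.out.Z

# uncompress instead of gunzip when reloading
uncompress -c $TABLENAME.out.Z > npipe.$$ &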


You'll have to use named pipes; this is the basics of what we use.
You can script this up to load/unload a few tables at once, but not too many,
as you end up spawning too many processes and pipes and the whole thing
blows up (there is a minimal driver sketch after the reload script below).
 
To copy a table out....
# Create the named pipe file
\rm -f npipe.$$
/etc/mknod npipe.$$ p

# Run the sql in the background
sql -u$DBOWNER $DBNAME << EOF > $TABLENAME.unload.log &
\t
copy $TABLENAME() into 'npipe.$$'
\p\t\g
EOF
print -- "unloading $TABLENAME to $TABLENAME.out.gz at $(date +%T)"
 
# Compress the contents of the pipe to a file
gzip -c < npipe.$$ > $TABLENAME.out.gz
 
# Remove named pipe
\rm npipe.$$
exit
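
If you wrap the unload steps above in a script, a minimal invocation might
look like this (the script name unload_table.sh and the values are
hypothetical; DBOWNER, DBNAME and TABLENAME are the variables the script
expects, and must be exported so the child shell can see them):

# Hypothetical wrapper around the unload steps above
export DBOWNER=ingres
export DBNAME=mydb
export TABLENAME=big_table

ksh unload_table.sh

# Check the compressed unload file was produced
ls -l $TABLENAME.out.gz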
 
And to copy the table back in....
 
# Create the named pipe file
\rm -f npipe.$$
/etc/mknod npipe.$$ p
 
# Uncompress the file into the pipe (in the background)
gunzip -c $TABLENAME.out.gz > npipe.$$ &
print -- "reloading $TABLENAME from $TABLENAME.out.gz at $(date +%T)"
 
# Run the sql in the foreground, reading from the pipe
sql -u$DBOWNER $DBNAME << EOF >> $TABLENAME.load.log
\t
copy $TABLENAME() from 'npipe.$$'
\p\t\g
EOF

# Remove named pipe
\rm npipe.$$
exit
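
As mentioned at the top, you can drive this for a few tables at once, as long
as you keep the number of concurrent unloads small. A minimal sketch, assuming
the unload steps are wrapped in a hypothetical per-table script called
unload_table.sh and the table names shown are just examples:

# Unload a handful of tables in parallel - keep the list short so you
# don't spawn too many processes and pipes at once
export DBOWNER=ingres
export DBNAME=mydb

for t in customer orders order_line
do
    TABLENAME=$t ksh unload_table.sh &
done

# Wait for all the background unloads to finish
wait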

Hope this helps.

Chris Watson, Database Administration - Technical Support (NICL)
chris.watson@northern-electric.co.uk

