Installing Sybase Auditing


  1. Create the auditing devices and auditing database with the Transact-SQL disk init and create database commands. For example:
    USE master
    go
    DISK INIT NAME='sybsecurity_data1', PHYSNAME='/sybdevices/cids/sybsecurity_data1.dat', VDEVNO=60, SIZE=5120, VSTART=0, CNTRLTYPE=0, DSYNC=true
    go
    EXEC sp_diskdefault 'sybsecurity_data1', defaultoff
    go
    DISK INIT NAME='sybsecurity_data2', PHYSNAME='/sybdevices/cids/sybsecurity_data2.dat', VDEVNO=61, SIZE=5120, VSTART=0, CNTRLTYPE=0, DSYNC=true
    go
    EXEC sp_diskdefault 'sybsecurity_data2', defaultoff
    go
    DISK INIT NAME='sybsecurity_data3', PHYSNAME='/sybdevices/cids/sybsecurity_data3.dat', VDEVNO=62, SIZE=5120, VSTART=0, CNTRLTYPE=0, DSYNC=true
    go
    EXEC sp_diskdefault 'sybsecurity_data3', defaultoff
    go
    DISK INIT NAME='sybsecurity_log1', PHYSNAME='/sybdevices/cids/sybsecurity_log1.dat', VDEVNO=63, SIZE=2560, VSTART=0, CNTRLTYPE=0, DSYNC=true
    go
    EXEC sp_diskdefault 'sybsecurity_log1', defaultoff
    go
    CREATE DATABASE sybsecurity ON sybsecurity_data1=10 LOG ON sybsecurity_log1=5
    go
    EXEC sp_dboption 'sybsecurity', 'trunc log on chkpt', true
    go
    EXEC sp_dboption 'sybsecurity', 'abort tran on log full', true
    go
    use sybsecurity
    go
    EXEC sp_changedbowner 'sa'
    go
    checkpoint
    go

  2. Use isql to execute the installsecurity script:

    cd $SYBASE/ASE-12_5/scripts
    setenv DSQUERY server_name
    isql -Usa -Ppassword -Sserver_name < installsecurity

  3. Shut down and restart Adaptive Server to initialise the auditing subsystem.

  4. When you have completed these steps, the sybsecurity database has one audit table (sysaudits_01) created on its own segment. You can enable auditing at this point, but you should first add more audit tables with sp_addaudittable.

  5. Now create the extra sysaudits tables in sybsecurity by extending the database onto the devices you initialised in step 1:

    use master
    go
    alter database sybsecurity on sybsecurity_data2=10
    go
    use sybsecurity
    go
    sp_addaudittable 'sybsecurity_data2'
    go
    use master
    go
    alter database sybsecurity on sybsecurity_data3=10
    go
    use sybsecurity
    go
    sp_addaudittable 'sybsecurity_data3'
    go

  6. Create the repository database sybaudit:

    USE master
    go
    CREATE DATABASE sybaudit ON DATA1=500 LOG ON LOG1=50
    go
    EXEC sp_dboption 'sybaudit', 'select into/bulkcopy/pllsort', true
    go
    EXEC sp_dboption 'sybaudit', 'trunc log on chkpt', true
    go
    USE sybaudit
    go
    CHECKPOINT
    go
    EXEC sp_changedbowner 'sa'
    go

  7. Create the repository table audit_data in sybaudit:

    use sybaudit
    go
    CREATE TABLE dbo.audit_data (
        event     smallint     NOT NULL,
        eventmod  smallint     NOT NULL,
        spid      smallint     NOT NULL,
        eventtime datetime     NOT NULL,
        sequence  smallint     NOT NULL,
        suid      int          NOT NULL,
        dbid      smallint     NULL,
        objid     int          NULL,
        xactid    binary(6)    NULL,
        loginname varchar(30)  NULL,
        dbname    varchar(30)  NULL,
        objname   varchar(255) NULL,
        objowner  varchar(30)  NULL,
        extrainfo varchar(255) NULL,
        nodeid    tinyint      NULL
    ) LOCK ALLPAGES
    go

  8. Add a threshold action procedure for the audit segments created above, using the example below:

    use sybsecurity
    go
    create procedure sp_audit_thresholdaction
    as
    declare @audit_table_number int
    select @audit_table_number = value
    from master.dbo.sysconfigures
    where name = "current audit table"
    exec sp_configure "current audit table", 0, "with truncate"
    if @audit_table_number = 1
    begin
        insert sybaudit.dbo.audit_data select * from sysaudits_01
        truncate table sysaudits_01
    end
    if @audit_table_number = 2
    begin
        insert sybaudit.dbo.audit_data select * from sysaudits_02
        truncate table sysaudits_02
    end
    if @audit_table_number = 3
    begin
        insert sybaudit.dbo.audit_data select * from sysaudits_03
        truncate table sysaudits_03
    end
    return(0)
    go

  9. To apply this action to the three segments issue the following commands:

    use sybsecurity
    go
    EXEC sp_addthreshold 'sybsecurity', 'aud_seg_01', 200, 'sp_audit_thresholdaction'
    go
    EXEC sp_addthreshold 'sybsecurity', 'aud_seg_02', 200, 'sp_audit_thresholdaction'
    go
    EXEC sp_addthreshold 'sybsecurity', 'aud_seg_03', 200, 'sp_audit_thresholdaction'
    go

  10. Next create a procedure to flush all the data from sybsecurity into the repository table in sybaudit:

    use sybaudit
    go
    IF OBJECT_ID('dbo.sp_aud_flush_display') IS NOT NULL
    BEGIN
        DROP PROCEDURE dbo.sp_aud_flush_display
        IF OBJECT_ID('dbo.sp_aud_flush_display') IS NOT NULL
            PRINT '<<< FAILED DROPPING PROCEDURE dbo.sp_aud_flush_display >>>'
        ELSE
            PRINT '<<< DROPPED PROCEDURE dbo.sp_aud_flush_display >>>'
    END
    go
    CREATE PROCEDURE dbo.sp_aud_flush_display
    AS
    BEGIN
        exec sybsecurity..sp_audit_thresholdaction
        exec sybsecurity..sp_audit_thresholdaction
        exec sybsecurity..sp_audit_thresholdaction
        select distinct loginname from audit_data
    END
    go
    EXEC sp_procxmode 'dbo.sp_aud_flush_display', 'unchained'
    go

  11. Set the following parameters on the Sybase server:

    sp_configure "auditing", 1
    go
    -- this enables auditing
    sp_configure "suspend audit when device full", 0
    go
    sp_configure "audit queue size", 100
    go
    -- this sets the queue size for handling auditing events

  12. To audit the sa_role logins issue the following command:

    sp_audit "all", "sa_role", "all", "on"
    go
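Once auditing is enabled, the repository gives a quick way to verify that events are being captured. A small sketch using only the sp_aud_flush_display procedure and audit_data table created above (the ordering column is just an illustration):

    use sybaudit
    go
    -- move anything sitting in the current sysaudits table into the repository
    -- and list the distinct logins captured so far
    exec sp_aud_flush_display
    go
    -- or query the repository directly for recent events
    select loginname, dbname, eventtime
    from audit_data
    order by eventtime desc
    go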

Remote table creation

This process sets up a proxy table so that you can select from and insert into a remote table on another server from your local server. It is more elegant than doing regular bcp's. The following example defines the remote Adaptive Server table authors, starting with the server definition:

1. Define a server named SYBASE. Its server class is ASEnterprise, and its name in the interfaces file is SYBASE:

    exec sp_addserver SYBASE, ASEnterprise, SYBASE

2. Define a remote login alias. This step is optional. User "sa" is known to remote server SYBASE as user "sa", password "timothy":

    exec sp_addexternlogin SYBASE, sa, sa, timothy

3. Define the remote authors table:

    create existing table authors (
        au_id      id          not null,
        au_lname   varchar(40) not null,
        au_fname   varchar(20) not null,
        phone      char(12)    not null,
        address    varchar(40) null,
        city       varchar(20) null,
        state      char(2)     null,
        country    varchar(12) null,
        postalcode char(10)    null
    )
    EXTERNAL TABLE at "SYBASE.pubs2.dbo.authors"

4. Update statistics on the table to ensure reasonable choices by the query optimizer:

    update statistics authors

5. Execute a query to test the configuration:

    select * from authors where au_lname = 'Carson'
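Because the proxy table also accepts writes, an insert issued locally lands in the remote pubs2..authors table. A small sketch; the values are illustrative only:

    -- inserted through the proxy; the row is created on the remote SYBASE server
    insert authors (au_id, au_lname, au_fname, phone)
    values ('123-45-6789', 'Smith', 'Anna', '408 555-0101')
    go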

Correcting identity row gaps

You can fix the rows containing the very large identity values to correct gaps in the identity column. Perform the following steps (a worked sketch follows the list):

set identity_insert table_name on

bcp out the data from the table.

set identity_insert table_name off

Drop and recreate the table, possibly with a smaller numeric field for the identity column, e.g. numeric(5,0) instead of numeric(18,0)

bcp the table back in
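A worked sketch of the sequence above, using a hypothetical table mydb..my_table with identity column row_id and placeholder login/server names:

    -- in isql, allow explicit identity values on the table
    set identity_insert mydb..my_table on
    go

From the shell, copy the data out in character mode:

    bcp mydb..my_table out /sybdumps/my_table.out -Usa -P<password> -S<server> -c

Back in isql, switch the option off, then drop and recreate the table with a smaller identity column:

    set identity_insert mydb..my_table off
    go
    drop table my_table
    go
    create table my_table (
        row_id numeric(5,0) identity,
        col1   varchar(30) not null
    )
    go

Finally load the data back in from the shell:

    bcp mydb..my_table in /sybdumps/my_table.out -Usa -P<password> -S<server> -c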

Installing dbccdb

From scratch

1. Run sp_plan_dbccdb from the master database to get an estimate of the sizes needed.

2. Create a dedicated data and log device for the dbccdb database and create the database on these, or just create it on existing devices.

3. Run the install script to create all the necessary tables and stored procs in the dbccdb database:

    isql -U -P -S -i$SYBASE/$SYBASE_ASE/scripts/installdbccdb

4. Add segments to the dbccdb database, where dbccdb_data1 is the data device dbccdb was created on:

    use dbccdb
    go
    sp_addsegment scanseg, dbccdb, dbccdb_data1
    go
    sp_addsegment textseg, dbccdb, dbccdb_data1
    go

5. Add a 16K pool if one does not already exist:

    sp_poolconfig "default data cache", "150M", "16K"
    go

6. Create workspaces; the sizes in MB come from the highest values in the sp_plan_dbccdb output from step 1:

    use dbccdb
    go
    sp_dbcc_createws dbccdb, scanseg, scan_pubs2, "scan", "10M"
    go
    sp_dbcc_createws dbccdb, textseg, text_pubs2, "text", "10M"
    go

7. Configure databases into dbccdb, deriving the required values from the earlier sp_plan_dbccdb output:

    use dbccdb
    go
    sp_dbcc_updateconfig pubs2, "max worker processes", "4"
    go
    sp_dbcc_updateconfig pubs2, "dbcc named cache", "default data cache", "10K"
    go
    sp_dbcc_updateconfig pubs2, "scan workspace", scan_pubs2
    go
    sp_dbcc_updateconfig pubs2, "text workspace", text_pubs2
    go
    sp_dbcc_updateconfig pubs2, "OAM count threshold", "5"
    go
    sp_dbcc_updateconfig pubs2, "IO error abort", "3"
    go
    sp_dbcc_updateconfig pubs2, "linkage error abort", "8"
    go

You will need to increase the "number of worker processes" parameter to 100 and bounce the Sybase server; after it comes back up, set "max parallel degree" to 10 and "max scan parallel degree" to 3. A sketch of running a check against the configured database follows below.
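With dbccdb configured, a checkstorage run can be kicked off and its faults reviewed. A minimal sketch, assuming the pubs2 configuration above:

    -- run the storage check for the configured database
    dbcc checkstorage(pubs2)
    go
    use dbccdb
    go
    -- summary of faults recorded by the last checkstorage run for pubs2
    sp_dbcc_faultreport 'short', pubs2
    go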

Sybase DR – What to do if all devices deleted

In case all the device files vanish and you have to start from scratch, these are the steps to take:

1) Run buildmaster -d <path to master device> -s <size in pages> (or dataserver in ASE 12.5) to create a new master device.

2) Start the dataserver in single-user mode.

3) Log in as sa with a null password and do the following:
   a) update sysservers set srvnetname="<old backup server name>"
   b) alter database master on master=46MB (or whatever the master db size was before)
   c) Shut down and restart the server and backup server.
   d) Load the master database from a recent dump.

4) The server will shut down after the load. Start it up again in single-user mode, but this time log in with your usual sa account and execute the following:
   a) In the sysusages table remove all but the first 2MB fragment for tempdb (dbid=2) and also change the segmap for this first fragment to 7:

    update sysusages set segmap=7 where lstart=0 and dbid=2
    go
    delete from sysusages where dbid=2 and lstart > 1000
    go

   b) update sysdatabases set status=320 where name not in ("master","model","tempdb","sybsystemdb")
   c) dbcc dbrepair (sybsystemprocs, dropdb)
   d) delete from sysdevices where name="sysprocsdev"
   e) Shut down the dataserver.

5) Edit the runserver file to start the dataserver in normal mode and start it up again.

6) Log in normally and create a sysprocsdev device, then create the sybsystemprocs database on it (200MB) and load this database from a recent dump.

7) Now you just need to extend tempdb and create the other databases.

Sybase IQ Guide

Building new installation

This is quite straightforward: simply put the CD into the drive, run the installation script and select to create the demo database, which is always quite useful.

Starting the server

The following command can be used:

    start_asiq @/tore/mjolner_iq/ASIQ-12_6/demo/asiqdemo.cfg

The contents of the .cfg file could be something like:

    # asiqdemo.cfg
    # ------------------------------------------------------------
    # Default startup parameters for the ASIQ demo database
    # ------------------------------------------------------------
    -n  mjolner_asiqdemo /tore/mjolner_iq/ASIQ-12_6/demo/asiqdemo.db
    -x  tcpip{port=2638}
    # The following parameters are also found in the configuration file
    # $ASDIR/scripts/default.cfg.  Any parameters not specified below
    # and not in the start up parameter list, will be added by start_asiq
    # using default.cfg as a guide.
    -c  48m
    -gc 20
    -gd all
    -gl all
    -gm 10
    -gp 4096
    -ti 4400

Note that the -n parameter gives the name of the IQ server, followed by the path to the database file.

Connecting to database/server

From Unix type:

    dbisql -nogui -host mjolner -port 2639 -jConnect -c "uid=DBA;pwd=SQL"

or, if you want the GUI version, omit the -nogui option; you need to set the DISPLAY first though. From Windows use Interactive SQL or Sybase Central.

Useful commands

    sp_iqstatus                Displays the current status of the IQ server/database, including used space etc.
    sp_iqdbsize                Displays the size of the current database
    select * from sysiqfile    Displays information on the dbspaces

Creating a new database

The easiest way to do this is from Interactive SQL, logged into asiqdemo; then type the following:

    create database '/tore/mjolner_iq/ASIQ-12_6/iq1/iq1.db'
    IQ PATH '/tore/mjolner_iq/ASIQ-12_6/iq1/iq1.iq'
    IQ SIZE 12000
    message path '/tore/mjolner_iq/ASIQ-12_6/iq1/iq1.iqmsg'
    temporary path '/tore/mjolner_iq/ASIQ-12_6/iq1/iq1.iqtmp'
    temporary size 6000
    page size 4096;

Importing data

To import a table from ASE into IQ use the following sequence:

1) Extract the DDL for the table you want to transfer and create the table in the IQ database.

2) bcp out the data with a command like:

    bcp murex2000db.MUREXDB.MPY_GLOB_DBF out MPY_GLOB_DBF.out -c -Usybufji -P<password> -SCIDS_DEV_SQL1 -t \|

3) From a dbisql or Interactive SQL session execute a command like:

    load TABLE MPY_GLOB_DBF (
        TIMESTAMP    '|',
        M_IDENTITY   '|',
        M_VTYP       '|',
        M_VINT       '|',
        M_MLAB0      '|',
        M_SLAB0      '|',
        M_MLAB1      '|',
        M_SLAB1      '|',
        M_VDAT       '|',
        M_VREF       '|',
        M_VVAL       '|',
        M__INDEX_    '|',
        M__REPLICAT_ '\n'
    )
    FROM '/tore/MPY_GLOB_DBF.out'
    ESCAPES OFF
    QUOTES OFF;

Note that there may be some problems if the original table has timestamp or identity columns, but these can be worked around.

DBCC faults – Corrective actions

!! Ignore any errors occurring on object syslogs; they are more than likely transient !!

The following error codes need the following actions:

    100031    run dbcc tablealloc(table_name)
    100035    run dbcc checktable(table_name, fix_spacebits)

Tempdb stress test

Stored proc to hammer tempdb, useful for tests:

    IF OBJECT_ID('dbo.heater') IS NOT NULL
    BEGIN
        DROP PROCEDURE dbo.heater
        IF OBJECT_ID('dbo.heater') IS NOT NULL
            PRINT '<<< FAILED DROPPING PROCEDURE dbo.heater >>>'
        ELSE
            PRINT '<<< DROPPED PROCEDURE dbo.heater >>>'
    END
    go
    create procedure heater
        @limit int
    as
    begin
        set nocount on

        declare
            @error          int,
            @procname       varchar(35),
            @tc             int,
            @rc             int,
            @rs             int,
            /* Timing the process */
            @elapsed_time1  datetime,
            @elapsed_time2  datetime,
            @snapdate       datetime,
            @upd_stats_diff int,
            @seconds        numeric(10,4),
            /* Counting rows. Declaring variables */
            @total_deleted  int,
            @total_inserted int,
            @total_updated  int,
            @inf_initial    int,
            @inf_final      int

        select @elapsed_time1 = getdate()
        print ""
        print "Starting time: %1!. ", @elapsed_time1
        print ""

        create table #heater (col1 int, col2 char(255), col3 char(255), col4 char(255))

        declare @counter int
        select @counter = 1
        while (@counter < @limit)
        begin
            insert into #heater values (@counter, "AAAAAAAAAAAAAAAAA", "BBBBBBBBBBBBBBBBBBBBBBB", "CCCCCCCCCCCCCCCCCCCCCCCCCC")
            select @counter = @counter + 1
            if (@counter = @limit/2)
            begin
                select getdate()
                exec sp_helpsegment "logsegment"
                exec sp_helpsegment "default"
                exec sp_helpsegment "system"
            end
        end

        /* Timing the process */
        select @elapsed_time2 = getdate()
        select @seconds = datediff(ss, @elapsed_time1, @elapsed_time2)
        print ""
        print "Ending time: %1!. Elapsed time: %2! seconds", @elapsed_time2, @seconds
    end
    go
    IF OBJECT_ID('dbo.heater') IS NOT NULL
        PRINT '<<< CREATED PROCEDURE dbo.heater >>>'
    ELSE
        PRINT '<<< FAILED CREATING PROCEDURE dbo.heater >>>'
    go
    EXEC sp_procxmode 'dbo.heater','unchained'
    go
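A usage sketch, assuming the procedure was created in the database you want to exercise (the row count is arbitrary):

    -- fill a #temp table with roughly 100,000 rows and print timings,
    -- plus tempdb segment usage at the halfway point
    exec heater 100000
    go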

Sybase Upgrade to ASE 12.5.3

Even though this is for ASE 12.5.3 in a specific company, the methodology can be applied to any dataserver upgrade.

Prerequisites from Unix

- You need to make sure you have sufficient space under /opt/sybase; about 1.5GB is needed to be on the safe side.
- You need to create a volume to put the devices on, e.g. /sybdevices/asyb01.

Initial build

1) Create a new directory underneath /opt/sybase in which to put the binaries, e.g. /opt/sybase/asyb01ds.

2) Copy the tar ball ASE1253.tar from mjolner:/tore into this newly created directory, and untar it.

3) vi the SYBASE.sh, SYBASE.csh and SYBASE.env files and change the server name to the server you are creating with the following command, e.g.:

    %s/asyb01ds/dsyb01ds/g

4) Source the SYBASE.sh file with a ". SYBASE.sh" command and cd to $SYBASE/$SYBASE_ASE/bin.

5) Do an export DISPLAY to the PC you are working from.

6) Type srvbuild to start building the new server. Create a dataserver, backup server, monitor server, historical server, XP server and Job Scheduler.

7) Place the devices in the directory given, e.g. /sybdevices/asyb01, and make the master device 100MB in size and the master database 50MB. Make the sybsystemprocs device and database 200MB, e.g. /sybdevices/a1syb/master.dat and /sybdevices/a1syb/sybsystemprocs.dat.

8) Assign the ports as per the listing in the Port assignments section of this document, with the hostnames being those of the Unix machine where the server is installed.

9) Do not worry about the error log path etc.; this will all be configured later, just accept the defaults.

10) Hit start and the Sybase server should create the various components; keep an eye out for any errors in the installation log. When finished, decline localizing the server and exit the srvbuild application.

Customisation for ******

These are the necessary steps to install the new ufjisyb scripts structure for a new Sybase server. All the files and directories that you need are contained in the tarball ufji_maint_scripts.tar in merak:/opt/sybase/ufjisyb, so for a new server do the following:

1) Create the directory /opt/sybase/ufjisyb/<servername> and untar the file into that directory.

2) Rename the server to the required one and also rename the server directories in the log directory.

3) In terms of modifying the scripts in the /opt/sybase/ufjisyb/<servername>/scripts directory, not much needs to be done:
   - Runsql: change the value of DSQUERY to the Sybase server name
   - runsql_script: change the value of DSQUERY to the Sybase server name
   - RUN_ds.sh: change the value of the MASTER variable, which points to the master device

4) You need to alter the names on the various maintenance scripts, e.g. capella.dbcc to CIDS_DEV_SQL1.dbcc.

5) You need to copy the existing config file $SYBASE/$DSQUERY.cfg into the new /opt/sybase/ufjisyb/<servername>/config directory. Do this before you begin using the start and stop scripts, i.e. after the binaries are in place.

6) In the /opt/sybase/ufjisyb/<servername>/logs directory change the servername directories to the new ones, e.g. /opt/sybase/ufjisyb/asyb01ds/logs/asyb01ds.

7) Copy over the .common_settings file from /opt/sybase on an existing installation, if it is not already there.

8) In addition it is necessary to create new goto files in the /opt/sybase directory in the format goto_<servername>, e.g. goto_CIDS_DEV_SQL1. This file needs to be edited in the same way as goto_CIDS_DEV_SQL1 on merak is at the moment. This involves changing the value for DSQUERY, SYBASE and possibly DBA_PROJ and DBA_PROJECT if necessary, and any of the other SYBASE variables if required, say if it is a new Sybase version. You can check the Sybase variables required against the SYBASE.sh file in the $SYBASE directory.

9) You need to move the entries in the $SYBASE/interfaces file into /opt/sybase/interfaces on caph and then run the distribute_interfaces.sh script on caph to copy these new entries to all the Sybase servers. You then create a soft link from $SYBASE/interfaces on the machine where you are creating the new Sybase server to point to /opt/sybase/interfaces. In addition you will need to arrange for these entries to be added to the sql.ini file in w:\sybase\ini\sql.ini; ask a member of desktop support to do this.

10) You also need to create the matching login, tables, stored procs and rules in the Sybase server. This is done by running Maint_scripts_create_tabs_procs_rules_and_login.sql found in merak:/opt/sybase/ufjisyb, e.g. from merak run:

    isql -Usa -P -Sasyb01ds -i/opt/sybase/ufjisyb/Maint_scripts_create_tabs_procs_rules_and_login.sql

11) You can now try stopping and starting the Sybase server using the srvstop.sh and srvstart.sh scripts.
12) Create the following directory for dumping into: /sybdumps/<srvname>

13) Once the Sybase server is up and running the logins can be migrated from the existing Sybase server, and this is done as follows:

a) From the old Sybase server run:

    CREATE TABLE tempdb..t_syslogins (
        suid        smallint      NOT NULL,
        status      smallint      NOT NULL,
        accdate     datetime      NOT NULL,
        totcpu      int           NOT NULL,
        totio       int           NOT NULL,
        spacelimit  int           NOT NULL,
        timelimit   int           NOT NULL,
        resultlimit int           NOT NULL,
        dbname      varchar(50)   NOT NULL,
        name        varchar(50)   NOT NULL,
        password    varbinary(30) NULL,
        language    varchar(30)   NULL,
        pwdate      datetime      NULL,
        audflags    int           NULL,
        fullname    varchar(30)   NULL,
        srvname     varchar(30)   NULL,
        logincount  smallint      NULL
    )
    go
    insert into tempdb..t_syslogins
    select * from master..syslogins
    where name not in ("sa","sybufji","probe")
    go
    select * into tempdb..t_syssrvroles
    from master..syssrvroles where srid > 25
    go
    select r.* into tempdb..t_sysloginroles
    from master..sysloginroles r, master..syslogins l
    where l.suid = r.suid
    and l.name not in ("sa","sybufji","probe")
    go

b) Create the following tables in tempdb on the destination dataserver:

    CREATE TABLE tempdb..t_sysloginroles (
        suid   smallint NOT NULL,
        srid   smallint NOT NULL,
        status smallint NOT NULL
    )
    go
    CREATE TABLE tempdb..t_syssrvroles (
        srid       smallint      NOT NULL,
        name       varchar(30)   NOT NULL,
        password   varbinary(30) NULL,
        pwdate     datetime      NULL,
        status     smallint      NULL,
        logincount smallint      NULL
    )
    go
    CREATE TABLE tempdb..t_syslogins (
        suid        smallint      NOT NULL,
        status      smallint      NOT NULL,
        accdate     datetime      NOT NULL,
        totcpu      int           NOT NULL,
        totio       int           NOT NULL,
        spacelimit  int           NOT NULL,
        timelimit   int           NOT NULL,
        resultlimit int           NOT NULL,
        dbname      varchar(50)   NOT NULL,
        name        varchar(50)   NOT NULL,
        password    varbinary(30) NULL,
        language    varchar(30)   NULL,
        pwdate      datetime      NULL,
        audflags    int           NULL,
        fullname    varchar(30)   NULL,
        srvname     varchar(30)   NULL,
        logincount  smallint      NULL
    )
    go

c) bcp the tables from old to new with the following commands:

    bcp tempdb..t_syslogins out /sybdumps/t_syslogins.out -Usybufji -S<oldserver> -c -t \, -r \:
    bcp tempdb..t_syssrvroles out /sybdumps/t_syssrvroles.out -Usybufji -S<oldserver> -c -t \, -r \:
    bcp tempdb..t_sysloginroles out /sybdumps/t_sysloginroles.out -Usybufji -S<oldserver> -c -t \, -r \:

then:

    bcp tempdb..t_syslogins in /sybdumps/t_syslogins.out -Usybufji -S<newserver> -c -t \, -r \:
    bcp tempdb..t_syssrvroles in /sybdumps/t_syssrvroles.out -Usybufji -S<newserver> -c -t \, -r \:
    bcp tempdb..t_sysloginroles in /sybdumps/t_sysloginroles.out -Usybufji -S<newserver> -c -t \, -r \:

d) Next move the logins from the temp tables to the main tables with the following commands, run on the destination server:

    sp_configure "allow updates", 1
    go
    alter table tempdb..t_syslogins add procid int NULL
    go
    update master..syslogins set suid=3003 where suid=3
    go
    update master..sysloginroles set suid=3003 where suid=3
    go
    insert into master..syslogins select * from tempdb..t_syslogins
    go
    insert into master..syssrvroles select * from tempdb..t_syssrvroles
    go
    insert into master..sysloginroles select * from tempdb..t_sysloginroles
    go
    sp_configure "allow updates", 0
    go

e) You should now be able to log into the server using the old login and password.

Configure Server parameters

This is a bit tricky as ASE 12.5 handles memory differently from ASE 12, so the best way to do this is to look at the old ASE 12 config. Basically everything that is not default can be copied over to the new config file, except for the following:

1) Set the "max memory" parameter to be the same as the old "total memory" value, and also set the parameter "allocate max shared memory" to 1 so that all the memory is allocated at start-up time.

2) Procedure cache is now a fixed amount and not a percentage, so take 10% of max memory to start off with (see the sketch after this list).

3) It might be an idea to use cache partitioning or even named caches. For production, tempdb can be bound to its own tempdb cache, thus isolating it from the standard default data cache, e.g.:

   a)
    EXEC sp_cacheconfig 'tempdb_cache','1000.000M',mixed,strict
    go
    EXEC sp_poolconfig 'tempdb_cache','1000M','2K'
    go
    EXEC sp_poolconfig 'tempdb_cache','200M','16K'
    go
    EXEC sp_poolconfig 'tempdb_cache','2K','wash=60M'
    go
   b)
    sp_bindcache tempdb_cache, tempdb
   c)
    sp_cacheconfig tempdb_cache, "cache_partition=4"

4) Set the following parameters to be the same as the original: number of open databases, number of open indexes, max network packet size, number of devices, number of worker processes, max parallel degree, max scan parallel degree, additional network memory, max online engines, identity burning set factor, number of user connections, stack size, number of locks, xp_cmdshell context, cis bulk insert batch size, max cis remote connections, cis packet size, cis rpc handling.

5) Stop and start the Sybase server to initialise all these settings.
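A minimal sketch of the memory settings from steps 1 and 2, assuming an illustrative old "total memory" of 1,000,000 2K pages (take the real values from the old server's config file):

    -- set max memory to the old total memory value (in 2K pages; value here is illustrative)
    sp_configure "max memory", 1000000
    go
    -- pre-allocate all of it at start-up
    sp_configure "allocate max shared memory", 1
    go
    -- procedure cache is now an absolute size in 2K pages; roughly 10% of max memory
    sp_configure "procedure cache size", 100000
    go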

Create user databases

1) First prevent the master device from being used in create database commands with the following SQL command:

    EXEC sp_diskdefault 'master', defaultoff

2) The first database to alter is tempdb, so create a 1GB data device and a 1GB log device and alter tempdb onto these. You will need to remove the logsegment from the data device afterwards.

3) Create the other devices based on what exists in the existing Sybase server; the suggested naming convention for the device files is <servername>_data/log<nr>, e.g. asyb01ds_data1.

4) Once the devices have been created you can create the databases using the for load option.

5) Dump each database from the old server and load it into the new one (a sketch of steps 3-5 follows below).

6) Apply the various database options and database owners from the old server.
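A minimal sketch of steps 3-5 for one hypothetical database; the device numbers, sizes, database name and dump path are illustrative only:

    -- create a data and a log device following the suggested naming convention
    DISK INIT NAME='asyb01ds_data1', PHYSNAME='/sybdevices/asyb01/asyb01ds_data1.dat', VDEVNO=10, SIZE=512000, DSYNC=true
    go
    DISK INIT NAME='asyb01ds_log1', PHYSNAME='/sybdevices/asyb01/asyb01ds_log1.dat', VDEVNO=11, SIZE=256000, DSYNC=true
    go
    -- create the database "for load" so the dump can be loaded straight in
    CREATE DATABASE mydb ON asyb01ds_data1=1000 LOG ON asyb01ds_log1=500 FOR LOAD
    go
    -- load the dump taken from the old server and bring the database online
    LOAD DATABASE mydb FROM '/sybdumps/asyb01ds/mydb.dmp'
    go
    ONLINE DATABASE mydb
    go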

New ASE12.5.3 features

1) Compressed dumps will be introduced. The results from tests are very impressive:

   - An uncompressed dump of bildb takes 1 min 30s.
   - A compressed dump which compresses smaller than using the compress command takes 1 min 40s (Sybase compression factor of 2).
   - Compressing the uncompressed dump with compress takes 2 min 10s.
   - Compressing it with gzip takes 4 minutes.
   - A compressed dump smaller than the gzip output takes a combined time of 4 min 05s (Sybase compression factor of 6).

    Dump plus compress     3 min 40s
    Dump plus gzip         5 min 25s
    Compressed dump        4 min 05s

Therefore for the nightly dumps there is a choice between speed and size. If you don't care about size then a compression ratio of 2 is the best, but if you want to keep the current compressed dump size then ratio 6 is the best, though it does mean that the time to dump all the databases increases. The time to load an uncompressed and a compressed dump is exactly the same. I would advise using a compression ratio of 2 and granting Sybase a bit more space in /sybdumps (approximately 17% more required); that way the dumps will only take a bit longer whereas restoring from dumps will be much quicker than before.

The syntax to dump to a compressed file is, e.g.:

    dump database bildb to "compress::2::/sybdumps/bildb_compressed.dmp"

To load a compressed dump it is, e.g.:

    load database bildb from "compress::/sybdumps/bildb_compressed.dmp"

Sybmigrate utility

This utility allows you to migrate from a 2K page size server to 4K, 8K or even 16K. It only works between two ASE 12.5 servers though, so I will keep a 2K ASE 12.5.3 Sybase server available for this process.

Steps to perform the migration:

1) Load the database into the ASE 12.5.3 2K dataserver, currently asyb01ds on Unix machine stella. Also create a corresponding blank database on the 4K (target) server with the same name as the source database. You also need to create the same users, DBO etc. in the target database as in the source database, with corresponding logins. It is assumed that the logins and roles have already been migrated from the source dataserver to the destination server as described earlier. Also set the target database to have the select into/bulkcopy option switched on.

2) On the target dataserver you need to set up the source dataserver as a remote server as follows in isql, and also add it to the interfaces file:

    sp_addserver <source dataserver>
    go
    sp_addremotelogin <source dataserver>
    go

3) On the source dataserver make sure the target dataserver is in the interfaces file, and:

    sp_addserver <target dataserver>
    go

4) On both servers type:

    sp_configure "allow remote access", 1
    go

5) If you get any problems try dropping and recreating the servers and remote logins a few times. Also make sure the following entry is in the goto files: NN_SYB_PACKETSIZE=512. Test that it all works by typing the following from the source dataserver:

    exec <target server>...sp_who

6) Make sure that the following parameters are the same on both servers: cis bulk insert array size, cis bulk insert batch size, cis connect timeout, cis cursor rows, cis packet size, cis rpc handling.

7) From the command line on the source dataserver type sybmigrate.
   a) The target database is on the 4K server, e.g. mjolner_4k_1253ds.
   b) The source is the 2K dataserver, e.g. asyb01ds.
   c) The login HAS TO BE "sa" for it to work.

8) First of all you need to click on "setup source database for migration":
   a) Specify the database you want to migrate, e.g. murex2000db.
   b) Click Add path, then next.
   c) On the next screen select a logical device with enough space to house the repository database, and click setup.
   d) The setup will now start, so check the output log as it will tell you if there are any errors or config parameters which need to be changed to allow the migration to work.

9) When the setup is finished and the log shows the setup as complete, click on OK to take you back to the main menu.

10) Select "Migrate database objects and data" and click next.
    a) On the next screen click on migrate to start the migrate process from the source database to the target.
    b) Again check the log for any errors; if there are any, carry out the actions suggested and rerun the migrate.

Sybase & MSSQL Interview Questions

ASE:

Q) What are the main advantages and disadvantages of using identity columns?
A) The advantage is in the ordering of the table and the creation of unique rows; the disadvantage is identity gaps.

Q) How do you bcp out a subset of rows from a large table?
A) Create a view with the subset and bcp out via it.

Q) What does set forceplan on do?
A) Forces the join order for queries to be the order in which the tables appear in the from clause.

Q) How do you find out which process has filled up tempdb?
A) select * from master..syslogshold, but you will need to create an additional temporary database and bind it to your login first.

Q) What are the options for doing an upgrade to a Sybase server?
A) You can do an in-place Sybase upgrade, but it is much better to create a new server and migrate over the databases, logins etc.

Q) What does the term covered query mean?
A) It is when the columns specified in a query (columns to be returned plus columns in where clauses) are completely satisfied by an index.

Q) What is the difference between an MRU and an LRU caching strategy?
A) They stand for most recently used and least recently used and define which pages are swapped out first. The default is LRU, i.e. the pages least recently used get flushed out when more space is needed. MRU is referred to as fetch-and-discard and is ideal for queries where a page is only needed once, for example table scans on heap tables.

Q) Why would you want to use an MRU strategy?
A) MRU is ideal for queries where a page is only needed once by the query, for example table scans on heap tables, or where you have more than one table in a join query.

Q) In Adaptive Server 11.9.2 or earlier, explain the difference between a character and a block device and when they are typically used or could be used.
A) A character (un-cooked/raw) device writes directly to the disk and guarantees transactions are written to the disk immediately. A block (cooked) device does not write to the disk immediately, which can cause the database to lose its integrity should there be a media failure. Character devices are normally used for live databases. Using block devices for tempdb can in some circumstances improve performance for applications that create a lot of temporary tables.

Q) Briefly explain the difference between a transient and a hard 605 error.
A) A transient error: the operating system notifies SQL Server that an I/O operation is complete when it is not, or any time buffer information about a page differs from the disk information about the page (and the disk has the correct information). Normally a dbcc checkdb and dbcc checkalloc should then produce no more 605 errors. A hard error: normally hardware related; it can be caused by incorrectly formatted disks (overlapping partitions) or disk media failure.

Q) Explain the difference between slow and fast BCP and how to apply this.
A) A slow BCP leaves indexes in place and is the default. Fast BCP involves dropping the indexes/triggers for the table in question; the indexes are recreated after the data has been loaded in. The bcp -A flag can be used to increase the default network packet size, which can increase the speed of a BCP; however the underlying UNIX OS needs to be changed to support the larger packet size, and you would also need to allocate additional network memory in the Sybase dataserver, which is used by the bigger network packets.

Replication server:

Q) What is the save interval used for, pertaining to the stable queue?
A) It determines how long the stable queue will wait until it deletes information already applied to the replicate db. It is specified using rs_configure save_interval (in minutes). It can be used for redundancy in the event of a replication system failure: the save interval lets the sending Replication Server re-create all messages saved during the save interval.

Q) What two Replication Server parameters should always be increased from the default values?
A) The two most important are memory_limit, which sets the total memory available to the Replication Server, and sqt_max_cache_size, which sets the limit of cache available to the stable queue.

Q) When a DSI thread suspends due to a transactional error, what are the two methods used to force resumption of the queue?
A) Resume connection to dataserver.database skip transaction, or set autocorrection on for replication_definition with replicate at data_server.database.

Q) What Replication Server command shows all the configuration parameters pertaining to a Replication Server?
A) rs_configure, run in the RSSD for the Replication Server.

Q) What is the ID Server in a replication environment?
A) It is the master replication server which holds information on all the other replication servers.

Q) What happens if the ID Server goes down in a multi-repserver environment?
A) Everything runs OK until you try to update any rep defs etc. or restart any of the other replication servers.

Q) How would you find what command caused a DSI to go into suspend?
A) Check the Replication Server error log.

Q) If the error log indicates that a DSI is down due to a duplicate key error, what do you do?
A) First do a sysadmin log_first_tran to output the bad transaction; it can then be viewed with rs_helpexception. If you decide it can be skipped, then do a resume connection with skip transaction.

Q) What is autocorrection?
A) It is an option set on a rep def which converts updates and inserts into a delete followed by an insert; this is useful in dev environments where you tend to sometimes get a lot of duplicate transactions.

Q) How do you determine the time of the last transaction applied to the replicate database?
A) rs_lastcommit

Q) What is the easiest way to completely replicate a database from London to Singapore and New York?
A) Use MSA, database replication definitions.

Q) When was MSA (database replication definition) introduced?
A) Version 12.6.

Q) Does MSA replication work over a WAN?
A) Yes, I think so; I can't see why not, as warm standby works over a WAN.

MSSQL:

Q) If someone has a problem with a spid, how would you investigate it?
A) Use the activity monitor, and perhaps dbcc inputbuffer, and dbcc opentran to show the longest running transaction.

Q) How would you put a table completely into memory?
A) You can't.

Q) How do you change the default location of a database?
A) You can specify the default location for data and log files in the server properties, database settings.

Q) What is the difference between dbcc dbreindex and dbcc indexdefrag?
A) dbcc dbreindex is an offline procedure; dbcc indexdefrag is performed online.

Q) How would you check the statistics for a table?
A) Run dbcc show_statistics against it.

Q) What things would you configure on the analysis server, at a high level?
A) Increase minimum allocated memory, increase read-ahead buffer size, increase process buffer size, enable proactive caching, maybe increase the maximum number of threads.

Q) There are a few options which MSSQL Server lets you set which shouldn't be set on a production server?
A) You should NOT enable Auto Shrink or Auto Close, and a production database should be set to the full or bulk-logged recovery model, NOT simple.

Q) What services can be clustered in SQL Server 2008?
A) SQL Server, SQL Server Agent, Analysis Services, Reporting Services.

Q) How can you set the memory of the server to not go above 3GB?
A) In server properties set both the minimum and maximum server memory to 3GB.

Q) How would you move the tempdb?
A) Execute the following two commands: alter database tempdb modify file (name=tempdev|templog, filename='c:\Sqldata\tempdb.mdf|templog.ldf'), then restart the server and tempdb will be created on the new devices in the new location; you can then remove the old devices.

Q) In SQL Server 2005 what is port 1434 used for?
A) The SQL Server Browser uses, by default, port 1434 to listen for incoming connections.

Q) Where is logging information kept within SQL Server?
A) You can access the logs from SQL Server Management Studio; both the SQL Server, System and SQL Server Agent logs can be viewed, but the actual physical logs are on the file system.

Q) How do you start a SQL Server from the command prompt in single-user mode?
A) To start the default instance it is sqlservr.exe -m; for a named instance it is sqlservr.exe -m -s <servername>.

Q) What are the differences between clustered and non-clustered indexes?
A) A clustered index forms and orders the table, where the leaf level is the actual data page, whereas a non-clustered index is separate from the table and just points to the relevant pages within the table.

Q) How would you display fragmentation information for a specific table?
A) Use the dbcc showcontig command against the table or index.

Q) What command would you use to check the integrity of a database, and what option to check the physical allocation only?
A) You would use dbcc checkdb, and if you only want to check the physical allocation then you add the physical_only option, i.e. dbcc checkdb ('dbname') with physical_only.

Q) What is locking and how would you get information on locks?
A) Locking is the process of ensuring data consistency in a database; for example, while one process is updating a row of data it will not allow another process to update it at the same time. You can get information on locks by using the sp_lock stored proc, querying the dm_tran_locks DMV, and for more detail by running SQL Profiler.

Q) What is blocking and how would you check for it?
A) Blocking is a result of locking where one process holds resources that another needs. It can be identified by using the sp_who stored proc and drilling down with sp_lock, querying the dm_tran_locks DMV, and for even more detail by running SQL Profiler.

Q) What are trace flags and when would you use them?
A) Trace flags are used to set specific server characteristics. For example, they can be used to print out extra information about deadlocks with trace flags 1204 and 1222. They are either set at startup with the -T option or can be explicitly enabled with dbcc traceon(<traceflag number>).

Q) What are dynamic management views?
A) DMVs were introduced in SQL Server 2005 and provide a simple way of accessing the state of a server and gathering diagnostic information without too much overhead on the SQL Server.

Q) What are the benefits of SQL Server 64-bit architecture?
A) The main benefit is that you can use much more memory than you could on 32-bit, and you can also take advantage of 64-bit Windows Server.

Q) What is the dedicated admin port and when would you use it?
A) The Dedicated Admin Connection listens on a special port and allows an administrator to connect to an otherwise unresponsive server, for example one that has run out of user connections. It is accessed by running sqlcmd -A or by prefixing admin: to the instance name in SSMS.

Q) What is the Distributed Transaction Coordinator (DTC) used for?
A) The DTC manages transactions across a network of Windows-based systems. MS DTC allows distributed transaction support for Microsoft SQL Server and the updating of data residing on two or more Microsoft SQL Server systems.