Installing a Sybase ASE15 Server

The following points apply to a new installation or an upgrade of an existing server. With an upgrade, just create the new filesystem side by side with the existing installation and migrate over the logins, configuration settings, user databases etc. For an upgrade the ideal is to have enough space to run the two servers in parallel; if not, at least create a bare-bones Sybase server (system databases only), then drop a user database on the existing Sybase server, along with its devices, and recreate them for the new server. It is important that the environment variables for the new server contain no reference to the old server, so you will need to adjust the .cshrc and SYBASE.csh a little so that sourcing the new .cshrc loads the environment variables from scratch. Refer to the FPS Sybase server .cshrc file for examples. It is also important that you give the new server a different port number from the old server.

1)      First make sure that the Unix machine is ready and has all the required patches installed.

2)      The next step is to have Unix create the filesystem (5GB) and the various raw devices. As an example, you will need at least the following raw devices in place before installing the Sybase server:

/dev/vg_kapaks/rlv_master         224 MB
/dev/vg_kapaks/rlv_sybsystemdev   128 MB
/dev/vg_kapaks/rlv_sysprocsdev    320 MB
/dev/vg_kapaks/rlv_temp01        2016 MB

If you are doing an upgrade and space is limited, you could make the tempdb device smaller and create a larger one later.

3)      Next install the actual server. Take the CD/DVD/tarball from Sybase and place it in $SYBASE/software, $SYBASE being the directory where you have decided to install Sybase, which must be different from any existing Sybase installation. Unzip and extract the files.

4)      After you have untarred it, set the SYBASE variable to where you want to install the server, e.g. setenv SYBASE /progs/kapaks/sybase_15, then run the setup by first starting the X server on your PC and typing setup. You can also run it without a GUI by typing setup -console. During the install you will be asked various questions, including information about the license; just continue without a license for now, we will put in the license later as we get a 30-day trial anyway. Choose Full installation, Enterprise edition, Server license, and when asked, say continue installation without license. Also answer no to configuring email alerts; this can be done later if needed. Once all the software has been unloaded, answer NO to remembering the ASE plugin password; you will then be asked if you want to build the servers. Go through the various fields filling in the relevant information and click Build. I would suggest initially building just the dataserver, backup server and XP Server; other servers can be built later if required. Choose to custom configure all the servers. If this is a migration from an existing server, remember to use the same port numbers as before and to name the servers the same. Use the following answers:

-          Answer Mixed to application type.
-          2k for page size. VERY IMPORTANT if this is a migration; otherwise consider 4k for a completely new Sybase server installation.
-          Answer NO to the enable PCI question.
-          NO to optimise ASE configuration.
-          Other values are fairly obvious hopefully.
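Once the build finishes, a quick sanity check is worth doing before going further. This is a minimal example, not part of the original steps; connect with isql as sa to the new server first, then run:

select @@version
go
sp_helpdb
go
sp_helpdevice
go

If the version string, the system databases and the master/sysprocs/tempdb devices all look right, carry on with the localisation and configuration steps below.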
5)      Hopefully this all goes fine and you now have a working Sybase server. You need to check whether the Sybase server has to be localised to the Norwegian language; it almost certainly does. If you are migrating a server you can easily check this in the old server's log file by looking for the line "SQL Server's default sort order is"; if it says nocase_iso_1_nor then you need to install it for the new server. If it does, the next step is to localise it to the Norwegian language, which is done with the following steps (if not, you can skip this step):

-          First source the SYBASE.csh file by doing source ~/SYBASE.csh.
-          Copy the $SYBASE/charsets/iso_1/nocase_iso_1_nor.srt file over from another existing server and place it in the same folder on the new server, i.e. the $SYBASE/charsets/iso_1/ folder, and make sure it is readable by all.
-          Start $SYBASE/ASE-15_0/bin/sqlloc (having first started Exceed on your PC).
-          Select your Sybase server and enter the sa password, then change the sort order to "Dictionary order, case insensitive, modified for Norwegian" and, if necessary, the default character set to iso_1, and click OK a few times.
-          If the sqlloc application fails you can use sqllocres -r <resource file> (sqlloc.rs). The resource file can be found in $SYBASE/ASE-15_0/init/sample_resource_files. Just change the template resource file by putting in the Sybase server name and sa password, and setting sqlsrv.default_characterset: iso_1 and sqlsrv.sort_order: nocase_iso_1_nor.

6)      The above step should be enough, but some servers have an extra language module installed, for no apparent good reason. If you are migrating from an old server you can check for this in the master..syslanguages table. If there is a row called norwegian you can install it on the new server by running:

sp_addlanguage norwegian, norsk,
'January,February,March,April,May,June,July,August,September,October,November,December',
'Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec',
'Monday,Tuesday,Wednesday,Thursday,Friday,Saturday,Sunday',
mdy, 7

7)      The next step is to extend the temporary database onto its own device; 1GB is a good starting point, and you could also consider creating a few extra temporary databases. This is not necessary if you are installing Sybase ASE 15.0.3, which asks for a special temporary device at install time. You should at least consider creating a special temporary database to be used by logins with the sa_role.

8)      You should probably set an sa login password at this point for added security. Log into the server and type sp_password NULL, <new password>. After this is done, place the password in the $SYBASE/$SYBASE_ASE/install/$DSQUERY file.

9)      You now need to adjust various server parameters such as memory, number of open objects etc.; take these from an existing Sybase server and adjust as needed. The easiest way to do this is to look at the existing server's .cfg file and do something like cat DHT_KAPAKS.cfg | grep -v DEFAULT to find the non-default values. Ignore the monitoring and security related sections; these will be set when those features are installed later on.
Also ignore the section about buffer pools; you create these after the Sybase server is rebooted with the new parameters, so do that now, setting the buffer pools to the same values as on the server being migrated from. Also set the parameter number of open partitions to the sum of number of open objects and number of open indexes.

10)   At this point you can migrate over the logins if you are doing a migration from an existing server. The details vary a bit, but for example if you are upgrading from Sybase version 12 to 15 you would do the following steps; for other ASE versions you might need to change the temp table a little:

-- Create the temporary syslogins table on the new server
sp_configure 'allow updates to system tables', 1
go
USE tempdb
go
-- drop table dbo.temp_syslogins
-- go
CREATE TABLE dbo.temp_syslogins
(
suid        int           NOT NULL,
status      smallint      NOT NULL,
accdate     datetime      NOT NULL,
totcpu      int           NOT NULL,
totio       int           NOT NULL,
spacelimit  int           NOT NULL,
timelimit   int           NOT NULL,
resultlimit int           NOT NULL,
dbname      varchar(30)   NULL,
name        varchar(30)   NOT NULL,
password    varbinary(30) NULL,
language    varchar(30)   NULL,
pwdate      datetime      NULL,
audflags    int           NULL,
fullname    varchar(30)   NULL,
srvname     varchar(30)   NULL,
logincount  smallint      NULL,
procid      int           NULL
)
LOCK ALLPAGES
go

You may need to tweak the column types depending on which version of Sybase you are importing from; check the source syslogins table to double check.

-- Now bcp in the logins which were bcp'd out on the old Sybase server
-- bcp tempdb..temp_syslogins in tore_syslogins.out -Usa -P -SMICOS2 -n

-- Alter the table to add the new columns
alter table tempdb..temp_syslogins
add lastlogindate datetime NULL
add crdate datetime NULL
add locksuid int NULL
add lockreason int NULL
add lockdate datetime NULL
go

-- Delete the sa and probe logins from the temp_syslogins table
use tempdb
go
delete from tempdb..temp_syslogins where name in ("sa", "probe")
go

-- Delete existing logins which match by name
delete from tempdb..temp_syslogins
where name in
( select t.name
  from tempdb..temp_syslogins t, master..syslogins s
  where t.name = s.name )
go

-- Increase the suids. ONLY necessary if you are merging multiple Sybase servers into one.
update tempdb..temp_syslogins set suid = suid + 6150
go

-- Now copy the syslogins over to the master..syslogins table
insert into master..syslogins select * from tempdb..temp_syslogins
go

-- Create the sysloginroles table
USE tempdb
go
CREATE TABLE dbo.temp_sysloginroles
(
suid   smallint NOT NULL,
srid   smallint NOT NULL,
status smallint NOT NULL
)
LOCK DATAROWS WITH EXP_ROW_SIZE=1 ON system
go

You may need to tweak the column types depending on which version of Sybase you are importing from; check the source sysloginroles table to double check.

-- BCP in the data
-- bcp tempdb..temp_sysloginroles in c1p16_sysloginroles.out -Usa -P -SMICOS2 -n

-- Remove the roles for the sa and probe logins
delete from tempdb..temp_sysloginroles where suid <= 2
go

-- Alter the table to make it compatible with ASE 15
alter table tempdb..temp_sysloginroles
modify suid int not NULL
modify srid int not NULL
go

-- Increase the suids. ONLY necessary if you are merging multiple Sybase servers into one.
update tempdb..temp_sysloginroles set suid = suid + 6150
go

-- Copy the roles into master..sysloginroles
insert into master..sysloginroles select * from tempdb..temp_sysloginroles
go

-- The next steps relate to synchronising the suids after you have loaded the old
-- database into the new server.
-- First, on the original server, check whether there are any aliases set up, or whether
-- there are any users whose name differs from their syslogin name, with the following
-- on the original Sybase database:
select * from sysalternates
go
select l.suid, u.suid, l.name, u.name
from master..syslogins l, sysusers u
where l.suid = u.suid and l.name != u.name
go

-- The following will resync the suids in a user database with the suids in syslogins.
update sysusers set suid = l.suid
from sysusers u, master..syslogins l
where l.name = u.name

11)   You now need to set up the environment variables in .profile and .cshrc and also create the servername file containing the sa password, if you haven't already done so, in the $SYBASE/ASE-15_0/install/ directory. For example, for a Sybase server called MICOS1 this file would be called MICOS1 and would contain only the sa password; this is needed for the scripts to work. You can copy the environment variables from an existing Sybase server installation and just modify the values to your needs.

12)   You should now take a look at the $SYBASE installation folder and make sure that the interfaces, .cshrc, SYBASE.sh and SYBASE.csh files are all world readable.

13)   If this is a brand new machine, the next step is to generate the required licenses (always choose Un-served License). If this is just one server on an unclustered machine, for example a test server, then that is fairly easy and the license file can be generated from https://sybase.subscribenet.com/control/sybs/login?nextURL=%2Fcontrol%2Fsybs%2Findex. If it is a clustered server, you need to create a multimode license file for each machine in the cluster and place it underneath the SYSAM folder. You create a multimode license file by answering "2" to the question "Number of machines to license"; you then enter both machines it can run on, and generate and download the license file. The last step, for clustered and non-clustered Sybase servers, is to place the license file under $SYBASE/$SYBASE_SYSAM2/licenses and, if you want, update the value of LM_LICENSE_FILE in the .profile and/or .cshrc file to point to it, although this should not be necessary.

14)   Next we need to install the various standards that you might have. First you need to create the scripts directory (sc) etc., so just copy this from an existing server installation and place it in an appropriate place; the best approach is to use a tarball and get rid of the unnecessary files such as logs afterwards.

15)   Next we create the additional Sybase devices from the raw devices on Unix.

16)   The next step is to install several specific stored procedures etc.:

-          First create a new database called syb_drift with 20MB data and 5MB log:

CREATE DATABASE syb_drift ON data01='20M' LOG ON logg01='5M'
go
USE master
go
EXEC sp_dboption 'syb_drift', 'trunc log on chkpt', true
go

-          Type: cdhs, cd sc, inst (choose option 1 to install sybdrift and enter the sa password, but answer no to "Vil du installere crontab" and "Vil du teste konsistens-sjekk, backup og overvaaking?", i.e. "Do you want to install crontab" and "Do you want to test consistency check, backup and monitoring?").
-          Next you need to install the relevant sp_thresholdaction stored procedure; which one depends on whether or not you are using SQL Backtrack to back up the databases. If you are using SQL Backtrack, install the following stored procedures:

USE sybsystemprocs
go
IF OBJECT_ID('dbo.sp_thresholdaction') IS NOT NULL
BEGIN
    DROP PROCEDURE dbo.sp_thresholdaction
    IF OBJECT_ID('dbo.sp_thresholdaction') IS NOT NULL
        PRINT '<<< FAILED DROPPING PROCEDURE dbo.sp_thresholdaction >>>'
    ELSE
        PRINT '<<< DROPPED PROCEDURE dbo.sp_thresholdaction >>>'
END
go
USE sybsystemprocs
go
IF OBJECT_ID('dbo.sp_thresholdaction_old') IS NOT NULL
BEGIN
    DROP PROCEDURE dbo.sp_thresholdaction_old
    IF OBJECT_ID('dbo.sp_thresholdaction_old') IS NOT NULL
        PRINT '<<< FAILED DROPPING PROCEDURE dbo.sp_thresholdaction_old >>>'
    ELSE
        PRINT '<<< DROPPED PROCEDURE dbo.sp_thresholdaction_old >>>'
END
go
create procedure sp_thresholdaction_old
    @dbname      varchar(30),
    @segmentname varchar(30),
    @space_left  int,
    @status      int
as
declare @devname varchar(100), @before_size int, @after_size int,
        @before_time datetime, @after_time datetime, @error int
if @segmentname != (select name from syssegments where segment = 2)
begin
    print "THRESHOLD WARNING: database '%1!', segment '%2!' at '%3!' pages",
          @dbname, @segmentname, @space_left
end
go
EXEC sp_procxmode 'dbo.sp_thresholdaction_old', 'unchained'
go
IF OBJECT_ID('dbo.sp_thresholdaction_old') IS NOT NULL
    PRINT '<<< CREATED PROCEDURE dbo.sp_thresholdaction_old >>>'
ELSE
    PRINT '<<< FAILED CREATING PROCEDURE dbo.sp_thresholdaction_old >>>'
go
create procedure sp_thresholdaction
    @dbname      varchar(30),
    @segmentname varchar(30),
    @space_left  int,
    @status      int
as
declare @devname varchar(100), @before_size int, @after_size int,
        @before_time datetime, @after_time datetime, @error int,
        @cmd1 varchar(5000), @cmd2 varchar(5000)
set @cmd1 = '$DT_SBACKTRACK_HOME/bin/dtsbackup ${DTPHYSICAL}/' + @dbname + ' -log_only | tee $BACKUPKAT/ch/backtrack_logg'
set @cmd2 = '$BACKUPKAT/sc/m_sback_log ' + @dbname
--if @segmentname != (select name from syssegments where segment = 2)
begin
    print "THRESHOLD WARNING: database '%1!', segment '%2!' at '%3!' pages",
          @dbname, @segmentname, @space_left
    exec xp_cmdshell @cmd1
    exec xp_cmdshell @cmd2
    exec xp_cmdshell @cmd1
end
go
EXEC sp_procxmode 'dbo.sp_thresholdaction', 'unchained'
go
IF OBJECT_ID('dbo.sp_thresholdaction') IS NOT NULL
    PRINT '<<< CREATED PROCEDURE dbo.sp_thresholdaction >>>'
ELSE
    PRINT '<<< FAILED CREATING PROCEDURE dbo.sp_thresholdaction >>>'
go

If however you are not using SQL Backtrack, install the following version of sp_thresholdaction:

USE sybsystemprocs
go
IF OBJECT_ID('dbo.sp_thresholdaction') IS NOT NULL
BEGIN
    DROP PROCEDURE dbo.sp_thresholdaction
    IF OBJECT_ID('dbo.sp_thresholdaction') IS NOT NULL
        PRINT '<<< FAILED DROPPING PROCEDURE dbo.sp_thresholdaction >>>'
    ELSE
        PRINT '<<< DROPPED PROCEDURE dbo.sp_thresholdaction >>>'
END
go
USE sybsystemprocs
go
IF OBJECT_ID('dbo.sp_thresholdaction_old') IS NOT NULL
BEGIN
    DROP PROCEDURE dbo.sp_thresholdaction_old
    IF OBJECT_ID('dbo.sp_thresholdaction_old') IS NOT NULL
        PRINT '<<< FAILED DROPPING PROCEDURE dbo.sp_thresholdaction_old >>>'
    ELSE
        PRINT '<<< DROPPED PROCEDURE dbo.sp_thresholdaction_old >>>'
END
go
create procedure sp_thresholdaction_old
    @dbname      varchar(30),
    @segmentname varchar(30),
    @space_left  int,
    @status      int
as
declare @devname varchar(100), @before_size int, @after_size int,
        @before_time datetime, @after_time datetime, @error int
if @segmentname != (select name from syssegments where segment = 2)
begin
    print "THRESHOLD WARNING: database '%1!', segment '%2!' at '%3!' pages",
          @dbname, @segmentname, @space_left
end
go
EXEC sp_procxmode 'dbo.sp_thresholdaction_old', 'unchained'
go
IF OBJECT_ID('dbo.sp_thresholdaction_old') IS NOT NULL
    PRINT '<<< CREATED PROCEDURE dbo.sp_thresholdaction_old >>>'
ELSE
    PRINT '<<< FAILED CREATING PROCEDURE dbo.sp_thresholdaction_old >>>'
go
create procedure sp_thresholdaction
    @dbname      varchar(30),
    @segmentname varchar(30),
    @space_left  int,
    @status      int
as
declare @devname varchar(100), @before_size int, @after_size int,
        @before_time datetime, @after_time datetime, @error int,
        @cmd1 varchar(255)
select @cmd1 = '$BACKUPKAT/sc/dump_t ' + @dbname
--if @segmentname != (select name from syssegments where segment = 2)
begin
    print "THRESHOLD WARNING: database '%1!', segment '%2!' at '%3!' pages",
          @dbname, @segmentname, @space_left
    exec xp_cmdshell @cmd1
end
go
EXEC sp_procxmode 'dbo.sp_thresholdaction', 'unchained'
go
IF OBJECT_ID('dbo.sp_thresholdaction') IS NOT NULL
    PRINT '<<< CREATED PROCEDURE dbo.sp_thresholdaction >>>'
ELSE
    PRINT '<<< FAILED CREATING PROCEDURE dbo.sp_thresholdaction >>>'
go

17)   The next step is to configure the new MDA monitoring tables, which is done as follows:

sp_configure "enable monitoring", 1
go
sp_configure "SQL batch capture", 1
go
sp_configure "max SQL text monitored", 100000
go
sp_configure "sql text pipe active", 1
go
sp_configure "sql text pipe max messages", 10000
go
sp_configure "object lockwait timing", 1
go
sp_configure "per object statistics active", 1
go
sp_configure "statement cache size", 10000
go
sp_configure "enable stmt cache monitoring", 1
go
sp_configure "deadlock pipe max messages", 10000
go
sp_configure "deadlock pipe active", 1
go
sp_configure "errorlog pipe active", 1
go
sp_configure "errorlog pipe max messages", 10000
go
sp_configure "wait event timing", 1
go
sp_configure "statement statistics active", 1
go
sp_configure "process wait events", 1
go
sp_configure "plan text pipe active", 1
go
sp_configure "plan text pipe max messages", 10000
go
sp_configure "statement pipe max messages", 10000
go
sp_configure "statement pipe active", 1
go

18)   Install any extra stored procedures; these can be copied from an existing Sybase server, e.g. sp__mda_hot_tables, sp_mda_io etc.

19)   At this stage you can create (for load) and start loading the user databases using the SQL Backtrack remote load procedure, for example from an existing Sybase server run:

dtsrecover /home/solo1/datatools/sbackups.physical/dhp_SOLP/commissiondb -server TORIGODB -database commissiondb -user sa -password <pass> -copyover

Remember to perform the suid resync section above. If you haven't done so previously, you first need to create all the required Sybase data devices from the raw partitions, which should already be on the Unix machine.

20)   Install dbccdb:

-          Run sp_plan_dbccdb to find the recommended size of the dbccdb and create it, for example:

USE master
go
CREATE DATABASE dbccdb ON data11='1500M' LOG ON log01='500M'
go
USE master
go
EXEC sp_dboption 'dbccdb', 'trunc log on chkpt', true
go
USE dbccdb
go
CHECKPOINT
go

-          Run the scripts/installdbccdb script in Unix using isql.
-          Run the following SQL, with the value being the max number of processes displayed by sp_plan_dbccdb:

sp_configure "number of worker processes", 6
go

-          If you haven't already done so earlier, create a 150MB 16k memory pool in the cache used by dbccdb, usually just the default data cache.
EXEC sp_poolconfig 'default data cache', '150M', '16K'
go
EXEC sp_poolconfig 'default data cache', '16K', 'wash=30M'
go

-          Create the workspaces from the maximum recommended values as follows:

use dbccdb
go
sp_dbcc_createws dbccdb, 'default', scan_dbccdb, scan, '750M'
go
sp_dbcc_createws dbccdb, 'default', text_dbccdb, text, '200M'
go

-          Configure dbccdb for each of the user databases as follows, just using the max values for all the databases:

sp_dbcc_updateconfig tlfbank, 'max worker processes', '12'
go
sp_dbcc_updateconfig tlfbank, 'dbcc named cache', 'default data cache', '200M'
go
sp_dbcc_updateconfig tlfbank, 'scan workspace', scan_dbccdb
go
sp_dbcc_updateconfig tlfbank, 'text workspace', text_dbccdb
go

21)   The next thing to install is auditing and unique sa logins; the details can be found in the document Auditing and Unique Logins.

22)   It is a good idea to create a second tempdb to reduce the load on tempdb, and that is done as follows:

create temporary database tempdb2 on tempdb_dev2='1024M' log on tempdb_log='524M'
go
sp_tempdb 'add', 'tempdb2', 'default'
go
sp_tempdb 'show'
go

23)   You will also need to install SQL Backtrack if that is required.

24)   When you come to load the actual databases at migration or upgrade time, make sure that you run update index statistics against all the user tables AND all the system tables except syslogs and sysgams, otherwise SQL Backtrack will run very slowly. A sketch of how to generate these commands is shown after this list.

25)   The final step is to set up all the cron jobs; here just look at what is currently configured on existing or similar servers. In some cases (check with the Unix SA) you need to create a copy of the crontab in the $HOME directory for clustering etc. You do this with a command similar to crontab -l > $HOME/crontab.sybgunda, where sybgunda is the username.

26)   Another thing you may find is that SQL Backtrack stops working if you have changed the IP address of the machine; this might be because the TSM (OBSI) settings have changed. To fix this you need to find out what the new settings are in the dsm.sys file and then probably create a new adsm dump pool.
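For step 24, the following is a minimal sketch of generating the update index statistics commands in each loaded database. The table names come straight from sysobjects, so nothing site specific is assumed; save the output to a file, add a go after each line, and run it back in through isql.

-- user tables (type U) and system tables (type S), skipping syslogs and sysgams
select "update index statistics " + name
from sysobjects
where type in ("U", "S")
  and name not in ("syslogs", "sysgams")
order by name
go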

How to drop a database when drop database fails

Follow the steps in this section to drop a database when drop database fails. Do not use these steps unless directed to do so by this book, or unless there is no critical data in the database.

1. Log in as "sa".

2. Check to make sure the database has been marked "suspect". The following query produces a list of all databases which are marked suspect:

1> select name from master..sysdatabases

2> where status & 256 = 256

3> go

3. If the database is marked "suspect", go to step 4. If it is not marked "suspect", mark it in one of the following ways:

a. Execute the sp_marksuspect stored procedure discussed under "How to Mark a Database 'suspect'", and restart Adaptive Server to initialize the change.

b. Use the procedure below:

1> sp_configure "allow updates", 1

2> go

1> use master

2> go

1> begin transaction

2> update sysdatabases set status = 256

3> where name = "database_name"

4> go

Verify that only one row was affected and commit the transaction:

1> commit transaction

2> go

Reset the allow updates option of sp_configure:

1> sp_configure "allow updates", 0

2> go

Restart Adaptive Server to initialize the change.

4. Remove the database:

1> dbcc dbrepair (database_name, dropdb)

2> go

dbcc dbrepair sometimes displays an error message even though it successfully drops the database. If an error message occurs, verify that the database is gone by executing the use database_name command. This command should fail with a 911 error, since you dropped the database. If you find any other error, contact Sybase Technical Support.
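As a small illustration (not from the original text; database_name is a placeholder), the check can look like this:

1> use database_name
2> go

This should fail with error 911. You can then confirm that no row remains for the database:

1> select name from master..sysdatabases where name = "database_name"
2> go

No rows returned means the drop succeeded.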

How to move the master database to a new device

This section addresses the error that occurs when you try to extend the master database onto a device other than the master device. It is recommended that you keep user objects out of the master database. If you keep user databases off the master device, you leave space in case the master database needs to grow. In addition, if you ever need to rebuild the master device, it will be easier if it does not contain user databases. Adaptive Server users can move any "home-grown" system procedures that start with "sp_" to sybsystemprocs (by dropping them from the master database and creating them in sybsystemprocs). Extend the master database only if absolutely necessary! If you are sure you must increase the master database size and have no room on the current master device, use the following procedure to remove user databases from the master device.

Move User Databases
  • Dump the user databases with the dump database command.
  • Rename the dumped databases on the master device with sp_renamedb.
  • Re-create the databases with their original names on another device with create database. Be sure they are created exactly like the old databases, to avoid 2558 and other errors. Refer to Error 2558 for more information.
  • Load the dumps with load database.
  • Use the online database command for each database to make the databases available for use.
  • Check the databases in their new location to make sure the load was successful (that is, perform a simple query with isql), and if everything loaded successfully, drop the old databases from the master device.
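A minimal sketch of that sequence for a single database follows; the database name, device names and dump path are placeholders, not from this document, and the create database sizes must of course mirror the original layout.

dump database userdb1 to "/dumps/userdb1.dmp"
go
-- sp_renamedb requires the database to be in single user mode
use master
go
sp_dboption userdb1, "single user", true
go
use userdb1
go
checkpoint
go
use master
go
sp_renamedb userdb1, userdb1_old
go
sp_dboption userdb1_old, "single user", false
go
-- recreate the database under its original name on another device,
-- sized and ordered exactly like the old one (see Error 2558 above)
create database userdb1 on data_dev2 = 100
go
load database userdb1 from "/dumps/userdb1.dmp"
go
online database userdb1
go
-- after checking the new copy with a simple query, drop the old copy on the master device
drop database userdb1_old
go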
You can now try to increase the size of the master database on the master device with the alter database command.

Increase Master Device Size

If the master device contains only the master database and the master device is too small, then use the following procedure.

Warning! Altering the master device is extremely risky! Avoid it if at all possible. Be familiar with the recovery methods in "System Database Recovery" in case you lose your master database or master device.
  • Back up the master database with the dump database command.
  • Save the contents of key system tables such as sysdatabases, sysdevices, sysusages, and syslogins.  Make a note of these values.  Also make a note of the path to the dump device in sysdevices.
  • Use the buildmaster utility to build a new master device with enough extra space so that you will never need to increase the master device again. When buildmaster completes, a new master database will exist on the new master device. The buildmaster executable is found in bin, so use ./buildmaster and follow the prompts.
  • You now need to create a new runserver file which points to this new master device (the -d option), and start up the server with this new runserver file.
  • Expand the size of the new master database with the alter database command, if necessary, so that it matches the size of the dumped master database (get this information from the original sysusages table, where the size is in 2K pages; the alter database command uses sizes in MB).
  • Execute the following command in isql:
1> select name, high from master..sysdevices
2> where name = "master"
3> go

and note the "high" value for the master device. Then shut down the server.
  • Add the -m option to the runserver file to start Adaptive Server in single-user mode.
  • Allow updates to the system catalog:
1> sp_configure "allow updates", 1
2> go
  • Change the value for srvnetname in sysservers from SYB_BACKUP to the name of your backup server.
  • Load the dump of the master database, using load database master from <full path name>.
  • Reset the “high” value in master..sysdevices:
1> begin transaction
2> go
1> update master..sysdevices
2> set high = <value of high from the select above>
3> where name = "master"
4> go
  • If the previous update affected only one row, commit the transaction.
  • Restart Adaptive Server.
  • Turn off allow updates:
1> sp_configure "allow updates", 0
2> go
  • Edit the new runserver file to take it out of single-user mode, i.e. remove the -m option, and restart the server. If this all works fine (leave it for a while), you can then remove the original master device and its related runserver file.
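After the final restart it is worth comparing the rebuilt master device against the values you saved earlier (a small sketch, not part of the original steps):

1> select name, low, high from master..sysdevices where name = "master"
2> go
1> select * from master..sysusages where dbid = db_id("master")
2> go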

How to perform a load from a remote backup server

A step-by-step guide. In this guide there are assumed to be two servers, TROPHY_1103 and TRIDENT_1103.

1. Create a backup server for TROPHY_1103 called TROPHY_1103_BACK.

2. Create a backup server for TRIDENT_1103 called TRIDENT_1103_BACK. (For information on creating backup servers refer to the reference manuals.)

3. Log into TROPHY_1103 and execute the following two commands:

sp_addserver SYB_BACKUP, TROPHY_1103_BACK
sp_addserver TRIDENT_1103_BACK

(If the SYB_BACKUP part does not work, do an sp_dropserver SYB_BACKUP first.)

4. Log into TRIDENT_1103 and execute the following two commands:

sp_addserver SYB_BACKUP, TRIDENT_1103_BACK
sp_addserver TROPHY_1103_BACK

5. Take a look at the interfaces file for TROPHY_1103 and make a note of the entry for TROPHY_1103_BACK. This information needs to be entered into the interfaces file for TRIDENT_1103, except the line which starts with master. Likewise, the entry for TRIDENT_1103_BACK in the TRIDENT_1103 interfaces file needs to be entered into the TROPHY_1103 interfaces file, again taking out the master line and leaving the query line.

6. Next make sure that both servers TROPHY_1103 and TRIDENT_1103 are set up for remote procedure calls by checking that sp_configure "allow remote access" shows a run value of 1; if not, issue the following command: sp_configure "allow remote access", 1

7. Test that the two servers can communicate by performing the following remote procedure calls. From TROPHY_1103: TRIDENT_1103_BACK...sp_who. And from TRIDENT_1103: TROPHY_1103_BACK...sp_who.

8. The two servers are now set up to allow remote backups. The load can be issued from either server (in this example it is performed from TROPHY_1103) with the following command: load database <database name> from "<full pathname to the tape or disk device used by the TRIDENT_1103 ASE>" at TRIDENT_1103_BACK
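For example, to load a hypothetical database pubs2 on TROPHY_1103 from a dump device known to TRIDENT_1103's backup server, and bring it online afterwards (the device path and database name are placeholders):

load database pubs2 from "/dev/nrmt0" at TRIDENT_1103_BACK
go
online database pubs2
go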

How to drop a corrupt table

1. sp_configure "allow updates", 1
go
or reconfigure with override (if System X)
go

2. Use the database; get its dbid [select db_id()] and write it down for reference.

3. select id from sysobjects where name = <bad-table-name>
go
... write that down, too.

4. select indid from sysindexes where id = <table-id>
go
... you will need these index IDs to run dbcc extentzap. Also, remember that if the table has a clustered index you will need to run extentzap on index "0", even though there is no sysindexes entry for that indid.

5. begin transaction
go
... not required, but a *real*good*idea*.

6. Type in this short script:

declare @obj int
select @obj = id from sysobjects where name = <bad-table-name>
delete syscolumns where id = @obj
delete sysindexes where id = @obj
delete sysobjects where id = @obj
delete sysprocedures where id in (select id from sysdepends where depid = @obj)
delete sysdepends where depid = @obj
delete syskeys where id = @obj
delete syskeys where depid = @obj
delete sysprotects where id = @obj
delete sysconstraints where tableid = @obj
delete sysreferences where tableid = @obj

This gets rid of all system catalog information for the object, including any object and procedure dependencies that may be present. Some of these lines may be unnecessary; you should type them in anyway just for the exercise.

7. commit transaction
go
(unless you made a mistake in step 6, in which case rollback.)

8. Prepare to run dbcc extentzap:

use master
go
sp_dboption <db-name>, "read", true
go
use <db-name>
go
checkpoint
go

(Each of the above must be given as a separate batch; that is, type "go" after every line.)

sp_role "grant", sybase_ts_role, "sa"
go
set role "sybase_ts_role" on
go

9. Run dbcc extentzap once for EACH index (including index 0, the data level) that you got from step 4 above:

********** The following commands are very dangerous; use them with care, because if you give the wrong object id all data for that object will be lost forever. Make sure that the object id is the id of the bad table and not one of your good objects. **********

dbcc traceon(3604)    /* lets you see errors */
go
dbcc extentzap( <db-id>, <object-id>, <index-id>, 0)
go
dbcc extentzap( <db-id>, <object-id>, <index-id>, 1)
go

Notice that extentzap runs TWICE for each index; this is because the last parameter (the "sort" bit) might be 0 or 1 for each index, and you want to be absolutely sure you clean them all out.

10. Clean up after yourself:

use master
go
sp_dboption <db-name>, "read", false
go
sp_configure "allow updates", 0
go
reconfigure (if System X)
go
use <db-name>
go
checkpoint
go

Database status values in sysdatabases

Status control bits in the sysdatabases table:

Decimal   Hex      Status
4         0x04     select into/bulkcopy; can be set by user
8         0x08     trunc log on chkpt; can be set by user
16        0x10     no chkpt on recovery; can be set by user
32        0x20     Database created with for load option, or crashed while loading database; instructs recovery not to proceed
256       0x100    Database suspect; not recovered; cannot be opened or used; can be dropped only with dbcc dbrepair
512       0x200    ddl in tran; can be set by user
1024      0x400    read only; can be set by user
2048      0x800    dbo use only; can be set by user
4096      0x1000   single user; can be set by user
8192      0x2000   allow nulls by default; can be set by user

There is also an undocumented value of 320; this is very similar to 256 (database suspect), but it allows you to perform certain functions on the database.
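For reference (a small added example, not part of the original table), individual bits can be tested with the & operator; for instance, to list every database with trunc log on chkpt set:

select name, status from master..sysdatabases
where status & 8 = 8
go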

Using set showplan

This section explains how to use and interpret the showplan command to better understand and utilize the SQL Server query optimizer. When you send a SQL statement to the Sybase SQL Server, the request first goes to a cost-based query optimizer whose job it is to find the most efficient data access path to fulfill the request. To do this, the optimizer examines such information as:
  • The structure of any indices defined on the table(s) involved
  • The distribution of data within the indices
  • The number of rows in the table(s) involved
  • The number of data pages used by the table(s) involved
  • The amount of data cache SQL Server is currently using
  • The access path that would require the least amount of I/O and, therefore, would be the fastest
Once an optimal access path is calculated, it is stored in a query plan within the procedure cache. showplan allows you to view the plan the optimizer has chosen and to follow each step that the optimizer took when joining tables, reading from an index or using one of several other methods to determine cost efficiency. To invoke the showplan command, enter:

1> set showplan on
2> go

This command causes SQL Server to display query plan information for every SQL statement executed within the scope of the SQL Server session. Since the determination of a query plan is performed independently from the actual data retrieval or modification, it is possible to examine and tune query plans without actually executing the SQL statement. This can be accomplished by instructing SQL Server not to execute any SQL statements via the following command:

1> set noexec on
2> go

Note: Issue noexec after showplan, or the set showplan command will not execute. For more information about executing the showplan command, refer to the SQL Server Performance and Tuning Guide.

Note: The showplan command does not function within stored procedures or triggers. However, if you set it to on and then execute a stored procedure or a command that fires a trigger, you can see the procedure or trigger output.

Use the following examples to analyze a query plan. In all cases, the examples use the pubs database provided with each SQL Server release.

Interpreting showplan Output

The output of the showplan command consists of many different types of statements, depending on the details of the access path that is being used. The following sections describe some of the more common statements.

STEP n

This statement is added to the showplan output for every query, where n is an integer, beginning with 1. For some queries, SQL Server cannot effectively retrieve the results in a single step and must break the query plan into several steps. For example, if a query includes a group by clause, the query needs to be broken into at least two steps: one to select the qualifying rows from the table and another to group them. The following query demonstrates a single-step query and its showplan output:

1> select au_lname, au_fname from authors
2> where city = "Oakland"
3> go

STEP 1
The type of query is SELECT
FROM TABLE authors
Nested iteration
Table Scan

A multiple-step example is shown in the next section.

The Type of Query Is SELECT (into a Worktable)

This showplan statement indicates that SQL Server needs to insert some of the query results into an intermediate worktable and, later in the query processing, select the values from that table. This is most often seen with a query which involves a group by clause, as the results are first put into a worktable, and then the qualifying rows in the worktable are grouped based on the given column in the group by clause. The following query returns a list of cities and indicates the number of authors who live in each city. The query plan is composed of two steps: the first step selects the rows into a worktable, and the second step retrieves the grouped rows from the worktable.

1> select city, total_authors = count (*)
2> from authors group by city
3> go

STEP 1
The type of query is SELECT (into a worktable)
GROUP BY
Vector Aggregate
FROM TABLE authors
Nested iteration
Table Scan
TO TABLE Worktable

STEP 2
The type of query is SELECT
FROM TABLE Worktable
Nested iteration
Table Scan

The Type of Query Is query_type

This statement describes the type of query for each step.
For most user queries, the value for query_type is select, insert, update, or delete. If showplan is turned on while other commands are issued, the query_type reflects the command that was issued. The following two examples show output for different queries or commands:

1> create table Mytab (col1 int)
2> go

STEP 1
The type of query is CREATE TABLE

1> insert publishers
2> values ("9904", "NewPubs", "Nome", "AL")
3> go

STEP 1
The type of query is INSERT
The update mode is direct
Table Scan
TO TABLE publishers

The Update Mode Is Deferred

There are two methods, or modes, that SQL Server can use to perform update operations such as insert, delete, update, and select into. These methods are called deferred update and direct update. When the deferred method is used, the changes are applied to all rows of the table by making log records in the transaction log to reflect the old and new value of the column(s) being modified (in the case of update operations), or the values that will be inserted or deleted (in the case of insert and delete). When all log records have been constructed, the changes are applied to the data pages. This method generates more log records than a direct update, but it has the advantage of allowing commands to execute which may cascade changes throughout a table.

For example, consider a table that has a column col1 with a unique index on it and data values numbered consecutively from 1 to 100 in that column. Execute an update statement to increase the value in each row by one:

1> update Mytable set col1 = col1 + 1
2> go

STEP 1
The type of query is UPDATE
The update mode is deferred
FROM TABLE Mytable
Nested iteration
Table Scan
TO TABLE Mytable

Consider the consequences of starting at the first row in the table and updating each row until the end of the table. This violates the unique index: updating the first row (which has an initial value of 1) to 2 would cause an error, since 2 already exists in the table, and updating the second row, or any other row except the last, would do the same. Deferred updates avoid unique index violations: the log records are created to show the new values for each row, the existing rows are deleted and the new values are inserted.

In the following example, the table authors has no clustered index or unique index:

1> insert authors select * from authors
2> go

STEP 1
The type of query is INSERT
The update mode is deferred
FROM TABLE authors
Nested iteration
Table Scan
TO TABLE authors

Because the table does not have a clustered index, new rows are added at the end of the table. The query processor distinguishes the rows already in the table (before the insert command) from the rows to be inserted, thus avoiding the continuous loop of selecting a row, inserting it at the end of the table, re-selecting the row just inserted and reinserting it. The deferred insertion method first creates the log records to show all currently existing values in the table. Then SQL Server rereads those log records to insert the rows into the table.

The Update Mode Is Direct

Whenever possible, SQL Server tries to directly apply updates to tables, since this is faster and creates fewer log records than the deferred method. Depending on the type of command, one or more criteria must be met in order for SQL Server to perform the update using the direct method. The criteria are as follows:
  • insert - Using the direct method, the table into which the rows are being inserted cannot be a table which is being read from in the same command. The second query example in the previous section demonstrates this, where the rows are being inserted into the same table from which they are being selected. In addition, if rows are being inserted into the target table, and one or more of the target table's columns appear in the where clause of the query, then the deferred method, rather than the direct method, will be used.
  • select into - When a table is being populated with data by means of a select into command, the direct method will always be used to insert the new rows.
  • delete - For the direct update method to be used for delete, the query optimizer must be able to determine that either zero or one row qualifies for the delete. The only way to verify this is to check that one unique index exists on the table, which is qualified in the where clause of the delete command, and the target table is not joined with any other table(s).
  • update - For the direct update method to be used for update commands, the same criteria apply as for delete: a unique index must exist so that the query optimizer can determine that no more than one row qualifies for the update, and the only table in the update command is the target table to update. Also, all updated columns must be fixed-length datatypes, not variable-length datatypes. Note that any column that allows null values is internally stored by SQL Server as a variable-length datatype column.
1> delete from authors
2> where au_id = "172-32-1176"
3> go

STEP 1
The type of query is DELETE
The update mode is direct
FROM TABLE authors
Nested iteration
Using Clustered Index
TO TABLE authors

1> update titles set type = 'popular_comp'
2> where title_id = "BU2075"
3> go

STEP 1
The type of query is UPDATE
The update mode is direct
FROM TABLE titles
Nested iteration
Using Clustered Index
TO TABLE titles

1> update titles set price = $5.99
2> where title_id = "BU2075"
3> go

STEP 1
The type of query is UPDATE
The update mode is deferred
FROM TABLE titles
Nested iteration
Using Clustered Index
TO TABLE titles

Note that the only difference between the second and third example queries is the column of the table which is updated. In the second query, the direct update method is used, whereas in the third query, the deferred method is used. This difference occurs because of the datatype of the column being updated: the titles.type column is defined as "char(12) NOT NULL" whereas the titles.price column is defined as "money NULL". Since the titles.price column is not a fixed-length datatype, the direct method cannot be used.

GROUP BY

This statement appears in the showplan output for any query that contains a group by clause. Queries that contain a group by clause are always two-step queries: the first step selects the qualifying rows into a table and groups them; the second step returns the rows from the table, as seen in the following example:

1> select type, avg(advance), sum(ytd_sales)
2> from titles group by type
3> go

STEP 1
The type of query is SELECT (into a worktable)
GROUP BY
Vector Aggregate
FROM TABLE titles
Nested iteration
Table Scan
TO TABLE Worktable

STEP 2
The type of query is SELECT
FROM TABLE Worktable
Nested iteration
Table Scan

Scalar Aggregate

Transact-SQL includes the aggregate functions avg, count, max, min, and sum. Whenever you use an aggregate function in a select statement that does not include a group by clause, the result is a single value, regardless of whether it operates on all table rows or on a subset of the rows defined in the where clause. When an aggregate function produces a single value, the function is called a scalar aggregate, and showplan lists it that way, as seen in the following example:

1> select avg(advance), sum(ytd_sales) from titles
2> where type = "business"
3> go

STEP 1
The type of query is SELECT
Scalar aggregate
FROM TABLE titles
Nested iteration
Table scan

STEP 2
The type of query is SELECT
Table Scan

showplan considers this a two-step query, which is similar to the group by output. Since the query contains a scalar aggregate which will return a single value, SQL Server keeps a "variable" internally to store the result of the aggregate function. It can be thought of as a temporary storage space that keeps a running total of the aggregate function as the qualifying rows from the table are evaluated. After all rows are evaluated from the table in step 1, the final value of the variable is selected in step 2 to return the scalar aggregate result.

Vector Aggregates

When a group by clause is used in a query that also includes an aggregate function, the aggregate function produces a value for each group. These values are called vector aggregates. The Vector Aggregate statement from showplan indicates that the query includes a vector aggregate.
The following example query includes a vector aggregate:

1> select title_id, avg(qty) from sales
2> group by title_id
3> go

STEP 1
The type of query is SELECT (into a worktable)
GROUP BY
Vector Aggregate
FROM TABLE sales
Nested iteration
Table Scan
TO TABLE Worktable

STEP 2
The type of query is SELECT
FROM TABLE Worktable
Nested iteration
Table Scan

from table Statement

This showplan output shows the table from which the query reads. In most queries, the from table is followed by the table's name. In other cases, it may show that it is selecting from a worktable. The significant fact is that the from table output shows the query optimizer's order for joining tables. The order in which the tables are listed is the order in which the tables are joined. This order often differs from the order in which tables are listed in the query's from or where clauses. The reason for this is that the query optimizer checks many join orders for the tables and picks the order that uses the fewest I/Os.

1> select authors.au_id, au_fname, au_lname
2> from authors, titleauthor, titles
3> where authors.au_id = titleauthor.au_id
4> and titleauthor.title_id = titles.title_id
5> and titles.type = "psychology"
6> go

STEP 1
The type of query is SELECT
FROM TABLE titles
Nested iteration
Table Scan
FROM TABLE titleauthor
Nested iteration
Table Scan
FROM TABLE authors
Nested iteration
Table Scan

This query illustrates the join order that the query optimizer chose for the tables, which is not the order listed in either the from or where clauses. By examining the order of the from table statements, it can be seen that the qualifying rows from the titles table are first located with the search clause titles.type = "psychology". Those rows are then joined with the titleauthor table using the join clause titleauthor.title_id = titles.title_id. Finally, the titleauthor table is joined with the authors table to retrieve the desired columns using the join clause authors.au_id = titleauthor.au_id.

to table Statement

When you issue a command that tries to modify one or more table rows, such as insert, delete, update, or select into, the to table statement shows the target table that is being modified. If the operation requires an intermediate step and inserts the rows into a worktable, the to table statement names the worktable instead of the user table.

1> insert sales
2> values ("8042", "QA973", "7/15/94", 7,
3> "Net 30", "PC1035")
4> go

STEP 1
The type of query is INSERT
The update mode is direct
TO TABLE sales

1> update publishers
2> set city = "Los Angeles"
3> where pub_id = "1389"
4> go

STEP 1
The type of query is UPDATE
The update mode is deferred
FROM TABLE publishers
Nested iteration
Using Clustered Index
TO TABLE publishers

Note that the showplan for the second query indicates that the publishers table is used for both from table and to table. With update operations, the query optimizer must first read the table containing the row(s) to be updated, resulting in the from table statement, and then must modify the row(s), resulting in the to table statement.

Worktable

For some queries, such as those that require ordered or grouped output, the query optimizer creates its own temporary table called a worktable. The worktable holds all the intermediate results of the query, where they are ordered and/or grouped, and then the final select is done. When all results are returned, the table is dropped automatically.
The tempdb database holds all temporary tables, so the System Administrator may need to increase the size of that database to accommodate very large worktables. For more information about worktables, refer to Chapter 8, "The tempdb Database." Since the query optimizer creates these worktables for its own internal use, the worktable names are not listed in the tempdb..sysobjects table.

Nested Iteration

The nested iteration is the default technique used to join tables and return rows from a table. It indicates that the query optimizer uses one or more sets of loops to read a table and fetch a row, qualify the row based on the search criteria given in the where clause, return the row to the front end, and then loop again for the next row. The following example shows the query optimizer doing nested iterations through each of the tables in the join:

1> select title_id, title
2> from titles, publishers
3> where titles.pub_id = publishers.pub_id
4> and publishers.pub_id = '1389'
5> go

STEP 1
The type of query is SELECT
FROM TABLE publishers
Nested iteration
Using Clustered Index
FROM TABLE titles
Nested iteration
Table Scan

Table Scan

This showplan statement identifies the method used to fetch the physical result rows from the given table. When the table scan method is used, execution begins with the first row in the table. Each row is then fetched, compared with the conditions set in the where clause, and returned as valid data if the conditions are met. No matter how many rows qualify, every row in the table must be checked, and this causes problems if the table is large (the scan has a high I/O overhead). If a table has one or more indexes on it, the query optimizer may still choose a table scan instead of reading the index. The following query shows a typical table scan:

1> select au_lname, au_fname
2> from authors
3> go

STEP 1
The type of query is SELECT
FROM TABLE authors
Nested iteration
Table Scan
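One practical note to round this off (an addition, not from the original text): while noexec is on, the server executes nothing except the command that turns it back off, so switch noexec off before showplan when you have finished examining plans:

1> set noexec off
2> go
1> set showplan off
2> go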

Structure of databases as regards loading

INTRODUCTION

The layout of Sybase SQL and ASE Server databases cannot be easily altered. A database, once created and altered, carries the logical layout of data and log segments for the duration of its life. A major re-organization of data and log segments is possible only by completely rebuilding the database using bcp or a similar method. A limited reorganization is possible via the sp_logdevice stored procedure. A reorganization of a database's logical layout is not achievable via dump and load. A database created for load must be laid out in exactly the same way as the database that was dumped, in order to retain its layout of data and log segments. In the simplest scenario, a database created for load must be created and altered in exactly the same way, in the same order, and with the same size parameters (though not necessarily on the same database devices) as the database that was dumped.

This note describes how to create a database and load a dump of it when the logical layout may be unknown. (It may also be used to create and load a database when the logical layout is known.) The issue of segments is described in much more detail in a white paper entitled "Segment Remapping with load database When Moving a Database", doc Id: 1324. The document is available on the Sybase Web site www.sybase.com.

BACKGROUND INFORMATION

The internal layout of a database's data and log pages is stored in the system table in the master database called sysusages. If you run the following query:

select * from sysusages order by lstart

you will see output similar to the following:

dbid segmap lstart  size     vstart     pad unreservedpgs
---- ------ ------ ----- ----------- ------ -------------
1      7      0  1536           4   NULL           208
1      7   1536  1024        3588   NULL          1016
1      7   2560 10240        6660   NULL         10240
2      7      0  1024        2564   NULL           632
2      7   1024 14336    83886082   NULL         14336
3      7      0  1024        1540   NULL           632
4      3      0 46080   100663298   NULL         40936
4      4  46080 15360   117440514   NULL         15352
4      3  61440  1024   100709378   NULL           512
4      3  62464  2048   100710402   NULL          1024

Each row in sysusages is referred to as a fragment and represents a contiguous chunk of database space on a single database device. A segmap of 7 indicates data and log are mixed, 3 = data only, 4 = log only; any higher values indicate user-defined segments, and any values lower than 3 indicate that segments have been dropped using sp_dropsegment. size is expressed in 2K pages; divide it by 512 to obtain megabytes. lstart indicates the logical start of the segment. vstart points to the location on the disk; the vstart value falls between the low and high column values in the sysdevices table (a join between these two tables will show the device names and paths).

Each row in the sysusages table represents one size specification with a create or alter statement. There is an exception to this: when the server is unable to locate contiguous disk space to match a size specification, it breaks the size into multiple smaller values and adds multiple fragments to the sysusages table. These fragments are still located on the same database device.

If any of the sizes end in a half megabyte, a special situation exists. When the server creates a database device and the size requested is a page larger than the space available on the disk partition, disk init does not fail.
Instead, a half megabyte chunk is allocated. If subsequently a database is created or altered to completely fill this database device, it will end up with a fragment that ends with a half megabyte. This situation, a quirk in the software, should be avoided because, as we stated, a database cannot be easily re-organized. Make sure that the size specified with disk init is available. If by chance a database device has been created with a half megabyte size, do not fill the device to the last page. On IBM AIX, in order to accommodate the Logical Volume Control Block, specify the size with the disk init command as one megabyte less than the size of the partition.

DUMP AND LOAD

Let's suppose that we wish to dump the database with dbid 4 and load it into another server. Or, we have a dump of this database, no information about its layout (the source server is unavailable), and want to load it into another server. How do we proceed? The key to success is creating the target database with the same logical layout as the source database contained in the dump. (This is why Sybase recommends keeping (1) database create and alter scripts, and (2) bcp copies and hard copies of key system tables such as sysdatabases, sysdevices, sysusages and syslogins.)

Keep in mind that if you have a database with half megabyte chunks in the sysusages table, you will have to create the database devices in exactly the same way on the target server as they were created on the source server. Otherwise, if half megabyte fragments do not exist, the database device sizes do not come into play, as long as they are large enough to accommodate the sizes of the database fragments.

Let's examine the layout of the database with dbid = 4 in the above example. The database consists of 4 fragments: 90MB of data, 30MB of log, followed by 2MB and 4MB fragments of data. This is the permanent logical organization of the database: logical page 46079 will always be a data page and it will always be followed by the first page of the log, logical page 46080. As we stated, it is not possible to alter this layout. Only additions to the bottom (with appropriate ALTER DATABASE commands) are possible.

SCENARIO 1: The source sysusages table is available.

If the create and alter scripts are available, edit them to update the database device information (presumably the device names are different on the target server, but they can be the same) and run them. Then load the database.

If the create and alter scripts are unavailable, examine the source sysusages table. We are interested in two columns: segmap and size. Create and alter the database in the same order as the fragments appear in the above query (ordered by lstart), specifying the same size parameter for each fragment. Treat any rows with segmap values other than 3, 4 or 7 as data; these values, stored in the dump, will be overwritten anyway. Note that any adjacent rows with identical segmap values may be combined in the create and alter statements into fewer rows. Note also that the fragments are device independent; they may be created on any database devices which have the available space. In this way, a limited reorganization of a database is in the end possible.

Once the database has been created, compare the sysusages tables of the source and target database before loading the dump. Except where fragments have been combined, the size columns of the rows and the order in which they appear should match exactly.

SCENARIO 2: The source sysusages table is unavailable.
SCENARIO 2: The source sysusages table is unavailable.

When the layout of a database contained in a dump is unknown, you may need to load the dump twice. First, create the database as well as you can, estimating the sizes of the segments, and with as few fragments as possible. Load the database. Next, examine the sysusages table:

select segmap, size from sysusages where dbid = n order by lstart

The output does not necessarily show how the source database was organized, because some source fragments may have had to be split up to fit the database as it was created on the target server. These fragments will be shown as adjacent rows with identical segmap values. If the placement of database segments on the database devices is unsatisfactory, drop the just-loaded database and proceed with the steps below.

Next, create and alter the database in the same order as the fragments appear in the above query (ordered by lstart), specifying the same size parameter for each fragment. Treat any rows with segmap values other than 3, 4 or 7 as data; these values, stored in the dump, will be overwritten anyway. Note that any adjacent rows with identical segmap values may be combined in the create and alter statements into fewer rows. Note also that the fragments are device independent; they may be created on any database devices which have the available space. In this way, a limited reorganization of a database is in the end possible. Finally, load the database.

END NOTES

To assure the success of a load, it is best to create the database for load the same size as the source database. (The target database may not be smaller than the source database.) If you need to enlarge a database, alter it after the load has completed.

What, then, is possible? Since in the dump and load scenario database fragments are independent of the devices on which they reside, they can be moved among devices, dispersed among a larger number of devices, or consolidated onto fewer devices. Remember, the value and order of the logical start of the fragments (the lstart column) is critical, but the vstart column, indicating the actual physical location of the fragment, is entirely irrelevant to this undertaking.

A guide to Replication server in plain English

Introduction

This guide will hopefully cut to the chase and show how you can set up a simple replication system consisting of one dataserver, two databases (one replicating, one receiving) and two replication servers, one for each database.

Installing Replication server

This is quite straightforward:

1) After unloading the files from the CD, set up the $SYBASE variable and also set $PATH to $SYBASE/bin:$PATH.

2) Next run dsedit and add the names and network addresses of the replication server you are building, the ID server (the original Repserver) if this is your second or subsequent Repserver, and also any dataservers you will be connecting to.

3) The menus are relatively self-explanatory; make sure that if this is the first Replication server in the system you define it as an ID server. Also note that the RSSD devices are on the same machine as the dataserver used for the RSSDs. Ideally each Repserver would have its own dataserver, but in this example they both come from the one dataserver. The disk partition goes in the same place as the Repserver files.

4) After having set all the options and having "complete" next to each menu item, you can build the Repserver; if there are any problems you will be notified.
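For reference, the entries that dsedit maintains end up in the $SYBASE interfaces file. On platforms that use the plain tcp entry format, an entry for the new Repserver would look roughly like the sketch below; the host name myhost and port 5005 are made up for illustration, and some platforms use a tli-style entry instead, so compare with an existing entry on your machine first. The master and query lines are indented under the server name.

phoebe_rep
        master tcp ether myhost 5005
        query tcp ether myhost 5005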

Setting up the first Replication server

This replication server is called phoebe_rep. On this replication server we need to create a connection to the primary database, which is called sundry and sits on dataserver phoebe. This is done using rs_init and selecting the option "add database". The menus are self-explanatory; the only thing to look out for is the option deciding whether the database is replicated or not. For this connection select yes, as it is the primary database. This should create OK and set up the DSI links etc. and also the RepAgent. The connection to a database can be stopped by issuing the following command from the Repserver which controls it: suspend connection to phoebe.sundry, and restarted with resume connection to phoebe.sundry (see the sketch below).
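A minimal isql sketch of stopping and restarting that connection from the controlling Repserver (phoebe_rep):

-- run against the Replication Server, not the dataserver
suspend connection to phoebe.sundry
go
-- ...do whatever maintenance required the suspension...
resume connection to phoebe.sundry
go
-- confirm nothing is left down afterwards
admin who_is_down
go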

Setting up the second Replication server

The second replication server is called phoebe_2nd_rep. When creating this second Repserver, make sure the first Replication server is up and running and point to it as the ID server. Once it is installed, add the replicate database to it using rs_init; for this select no for the question "is this database replicated". The replicate database in this example is called sundry3.

Setting up Route

We need to create a route from each primary replication server to each replication server that subscribes to data from it (in this case from phoebe_rep to phoebe_2nd_rep). To do this you will need to do the following. First, alter the connection to the RSSD database of the primary Repserver as follows (from the primary Repserver): suspend connection to phoebe_rep_RSSD, then alter connection to phoebe_rep_RSSD set log transfer to on, followed by resume connection to phoebe_rep_RSSD. The reason for this is so that the replicate Repserver can read any changes to the primary Repserver in terms of new definitions created etc. and also pick up all the ones which already exist.

Next you have to enable a RepAgent for this database, so log in to the phoebe server and issue the following command: sp_config_rep_agent phoebe_rep_RSSD, enable, phoebe_rep, sa, NULL followed by sp_start_rep_agent phoebe_rep_RSSD. The command to actually create the route can now be issued from the primary Repserver (phoebe_rep): create route to phoebe_2nd_rep set username as sa set password as NULL. The progress of this route creation can be checked with rs_helproute run from the RSSD for the primary Repserver, i.e. from phoebe_rep_RSSD. If everything is OK this will report the route as active. The full command sequence is sketched below.
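Put together, the route setup looks roughly like this (a sketch based on the steps above; the sa/NULL credentials are simply the ones used in this example and should be replaced with your own):

-- from the primary Repserver (phoebe_rep): allow its RSSD to act as a replication source
suspend connection to phoebe_rep_RSSD
go
alter connection to phoebe_rep_RSSD set log transfer to on
go
resume connection to phoebe_rep_RSSD
go

-- on the dataserver (phoebe): enable and start a RepAgent for the RSSD
sp_config_rep_agent phoebe_rep_RSSD, enable, phoebe_rep, sa, NULL
go
sp_start_rep_agent phoebe_rep_RSSD
go

-- from the primary Repserver (phoebe_rep): create the route
create route to phoebe_2nd_rep
set username as sa
set password as NULL
go

-- in the primary RSSD database (phoebe_rep_RSSD): check progress until it reports active
rs_helproute
go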

Setting up Replication

The next step is to select the table you want to replicate in the primary database (sundry), which is managed by the primary Repserver (phoebe_rep). In this example the table is called alpha1 and contains only 2 columns, one an int and the other a char(10). To enable this table for replication, issue the following command from database sundry: sp_setreptable alpha1, true. Incidentally, the commands to start and stop the RepAgent for sundry (which monitors the transactions on the alpha1 table through the syslogs) are sp_start_rep_agent sundry and sp_stop_rep_agent sundry. Also, to replicate a stored procedure (which can be a much more efficient way of doing things) the command is sp_setrepproc proc1, true. You also need to create a table with the same column structure in the replicate database (sundry3), which is managed by the replicate Repserver (phoebe_2nd_rep), i.e. create table alpha1 (a int, b char(10)). This is also true for replicating stored procs, where an identical stored proc must already exist in the replicate database.

Now we can check whether all the connections, routes and RepAgents etc. are up by issuing admin who_is_down at each of the replication servers. If this is all clear then we are ready to create the replication definition. The command, issued from the primary Repserver, is: create replication definition repdef1 with primary at phoebe.sundry with all tables named 'alpha1' (a int, b char(10)) primary key (a). You always have to specify a primary key! To check that the routing is working you can check whether the replication definition has been copied over to the replicate Replication server (phoebe_2nd_rep) by issuing rs_helprep in phoebe_2nd_rep_RSSD on dataserver phoebe. Incidentally, the stored proc to check on subscriptions is rs_helpsub.

Now we need to set up a subscription to this replication definition in the replicate Repserver, as follows (command issued from phoebe_2nd_rep): create subscription sub1 for repdef1 with replicate at phoebe.sundry3. The server will now start materializing this table from the primary database to the replicate database, and you can see the progress of this by issuing the following command from the replicate Repserver: check subscription sub1 for repdef1 with replicate at phoebe.sundry3. If this comes back as valid then everything is fine. You can also check on subscriptions by issuing rs_helpsub from the replicate RSSD database. The whole sequence is gathered together in the sketch below.
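The whole sequence from the steps above in one place, a sketch using this example's names; run each part where indicated:

-- in the primary database sundry, on dataserver phoebe: mark the table for replication
sp_setreptable alpha1, true
go

-- in the replicate database sundry3: create the matching table
create table alpha1 (a int, b char(10))
go

-- from the primary Repserver (phoebe_rep): create the replication definition
create replication definition repdef1
with primary at phoebe.sundry
with all tables named 'alpha1'
(a int, b char(10))
primary key (a)
go

-- from the replicate Repserver (phoebe_2nd_rep): subscribe and watch materialization
create subscription sub1 for repdef1 with replicate at phoebe.sundry3
go
check subscription sub1 for repdef1 with replicate at phoebe.sundry3
go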

Verifying

The replication process is now set up. To check it, insert a few rows into the alpha1 table in the sundry database and see if they are replicated to the alpha1 table in sundry3, as in the sketch below.
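For example (a trivial sketch; the sundry RepAgent must be running, i.e. sp_start_rep_agent sundry):

-- on dataserver phoebe, in the primary database
use sundry
go
insert into alpha1 values (1, 'hello')
insert into alpha1 values (2, 'world')
go

-- a few moments later, in the replicate database
use sundry3
go
select * from alpha1
go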

DR for a Replication Server

Restoring a Primary Database
________________________________________________________________________

Phase I: Clean up the old system

1) Log on to your primary Replication Server and suspend the DSI connection to the primary database.
suspend connection to srv1.pdb

2) Re-create the primary database.
  • · If your primary database is irretrievably damaged, drop it. If the database is
marked suspect, you must use the dbcc dbrepair command; drop database will not execute on a suspect database.
drop database pdb
dbcc dbrepair (pdb, dropdb)
  • · Drop any damaged database devices using sp_dropdevice.
  • · Create new database devices using disk init (if necessary).
  • · Create the new database using create database with the for load option (see the combined sketch below).
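A combined sketch of the bullets above; the device names, paths and sizes are hypothetical, and the create/alter sizes and their order must reproduce the original layout of pdb, as discussed in the dump and load note earlier in this document:

-- drop a damaged device, then create replacements
sp_dropdevice pdb_data_old
go
disk init name = 'pdb_data1', physname = '/dev/vg01/rpdb_data1', size = '100M'
go
disk init name = 'pdb_log1', physname = '/dev/vg01/rpdb_log1', size = '50M'
go
-- create the database for load with the same fragment layout as before
create database pdb on pdb_data1 = 90 log on pdb_log1 = 30 for load
go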
3) Restore the primary database.
  • · Load the database from the database dump.
load database pdb from <dump file>
  • · Restore any transaction log dumps.
load transaction pdb from <dump file>
  • · Activate the database using the online database command.
online database pdb

Phase II: Re-Establish Replication

If you have done up-to-the-minute recovery, your primary database should be current, and you can go directly to Step 8. Steps 4-7 are only necessary if you have not recovered all of the data in the database.

4) Zero out the LTM truncation point. Use the RSSD for your primary Replication Server, and run the following command:
rs_zeroltm srv1, pdb

5) Log on to the primary Replication Server and get the generation number for the primary database. Record this number.
admin get_generation, srv1, pdb

6) Use your primary database, and update the generation number. The new number should be higher than the old number.
dbcc settrunc('ltm', 'gen_id', <new number>)
The reason for doing this is that after the reload the generation number in the database will differ from the one recorded in the RSSD of the Repserver controlling it: if you are restoring a primary database to an earlier state, you must increment the database generation number so that the Replication Server does not ignore log records submitted after the database is reloaded. (Steps 4-6 are gathered into a single sketch below, after step 7.)

7) Recover any lost or inconsistent data.
  • · Recover any data modification lost from the primary database.
  • · Re-synchronize the replicate database:
  • · Re-materialize the replicate database.
  • · Run rs_subcmp.
  • · Replay the transaction log from a log backup.
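A compact sketch of steps 4-6, using this section's example names (srv1, pdb); the new generation number 101 is made up and should simply be higher than the number reported in step 5:

-- in the RSSD of the primary Replication Server
rs_zeroltm srv1, pdb
go

-- on the primary Replication Server
admin get_generation, srv1, pdb
go

-- in the primary database: set a generation number higher than the one just reported
use pdb
go
dbcc settrunc('ltm', 'gen_id', 101)
go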
8) Once the primary and replicate databases are correct, you can resume replication. Log on to your primary Replication Server and resume the DSI connection to your primary database.
resume connection to srv1.pdb

9) Restart the RepAgent on your primary database.

10) Review your error logs to ensure that no errors have occurred.

Re-playing the Transaction Log
________________________________________________________________________

1) Log on to your Adaptive Server. Create a new database that has the same size and segments as your primary database. This will be your temporary recovery database.

2) Shut down the Replication Server and restart it in single-user mode. To do this, you should have a run server file that contains the -M option. Once the Replication Server comes up, check the error log to ensure that the server came up in stand-alone mode.

3) Put the Replication Server into log recovery mode by executing the following commands:
set log recovery for srv1.pdb
allow connections

4) Configure a RepAgent for the temporary recovery database that you created in Step 1. Log into your primary data server, use the temporary recovery database, and execute the following command:
exec sp_config_rep_agent temp_rep, 'enable', 'rep1', 'sa', ''

5) Log on to your primary Adaptive Server and load the temporary database you created with the database dump of your primary database.
load database temp_rep from '<dump file>'
go
online database temp_rep

6) Start the RepAgent for the temporary recovery database in recovery mode. Although you are starting the RepAgent on the temporary recovery database, the 'connect database' is your original primary database:
exec sp_start_rep_agent temp_rep, recovery, 'srv1', 'pdb', 'rep1'

7) Verify that the RepAgent has replayed everything that you have loaded. There are two ways to do this:
  • · Examine the error log of your primary Adaptive Server, looking for a message
that states that recovery of the transaction log is complete.
  • · Run sp_help_rep_agent on the RepAgent for the temporary recovery database:
exec sp_help_rep_agent temp_rep, 'recovery'
The recovery status should be "not running" or "end of log".

8) Log on to your primary Adaptive Server and load the temporary database you created with the first transaction dump of your primary database:
load transaction temp_rep from '<dump file>'

9) Restoring the transaction dump will invalidate the LTM truncation point. Reset it by using the temporary recovery database and running the following command:
dbcc settrunc('ltm', 'valid')

10) Start the RepAgent for the temporary recovery database in recovery mode:
exec sp_start_rep_agent temp_rep, recovery, 'srv1', 'pdb', 'rep1'

11) Verify that the RepAgent has replayed everything that you have loaded. There are two ways to do this:
  • · Examine the error log of your primary Adaptive Server, looking for a message
that states that recovery of the transaction log is complete.
  • · Run sp_help_rep_agent on the RepAgent for the temporary recovery database:
exec sp_help_rep_agent temp_rep, 'recovery'
The recovery status should be "not running" or "end of log".

12) Repeat Steps 8, 9, 10, and 11 for each transaction log dump until you have played all transaction log dumps through the replication system (this per-dump loop is gathered into a single sketch below, after step 13).

13) Check the Replication Server error log for any loss detection messages. If you find any, then this process has failed, and you must find another way of re-synchronizing your data in the primary and replicate databases. Your other options are:
  • · Re-materializing the replicate database.
  • · Running rs_subcmp.
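The per-dump replay loop from Steps 8-11, gathered into one sketch; repeat it once per transaction log dump, in order:

-- on the primary Adaptive Server
load transaction temp_rep from '<dump file>'
go

-- in the temporary recovery database, re-validate the truncation point
use temp_rep
go
dbcc settrunc('ltm', 'valid')
go

-- restart the RepAgent in recovery mode and wait for "end of log"
exec sp_start_rep_agent temp_rep, recovery, 'srv1', 'pdb', 'rep1'
go
exec sp_help_rep_agent temp_rep, 'recovery'
go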
14) If you did not find loss detection messages in the previous step, shut down your Replication Server, and restart it in normal mode. To do this, you should have a run server file that does not contain the -M option. If you accidentally bring the server up in single-user mode, there will be a message to this effect in the error log. Shut Replication Server down, and start it up using a correct run server file.

15) Re-establish the LTM truncation point in the original primary database. Use the primary database, and run the following command:
dbcc settrunc('ltm', 'valid')

16) Restart the RepAgent for the original primary database in normal mode.
exec sp_start_rep_agent pdb

17) Log on to your Replication Server and check for any open transactions.
admin who, sqt

18) Drop the temporary recovery database, and remove its database devices.

Rebuilding Stable Queues
________________________________________________________________________

1) Log on to your replication server, and drop the damaged partition.
drop partition <partition name>

2) If necessary, add a new partition to replace the disk space. Remember, the file or raw partition must already exist before you execute this command.
  • · In UNIX, you can use the touch command to create a file.
  • · In Windows NT, you can create a file using a utility program such as Notepad.
add partition <partition name> on '<physical_name>' with size <size>

3) Check the disk space on your Replication Server. Your failed partition should show the status DROPPED, while your new partition should be listed as ONLINE.
admin disk_space

4) Rebuild the stable device:
rebuild queues

5) If the connection to your replicate database is suspended, resume it.

6) Examine your Replication Server error log. Wait for the log to display the message "Rebuild Queues: Complete". You may see the error message "DSI: detecting loss for database 'srv1.pdb'…"; if your RepAgent is not running, this message can be ignored.

7) If the RepAgent for the primary database is not running, start it up. Use your primary database and run the following:
exec sp_start_rep_agent pdb

8) Examine your Replication Server error log. Check for loss detection messages. If a data loss was detected, then this process has not fully succeeded, and you have lost transactions.
  • · Compare the primary and replicate databases, and replace any lost data.
  • · Use the ignore loss command to restart replication.
ignore loss from srv1.pdb to srv1.rdb
If no data loss was detected, Replication Server will end loss detection mode, and normal replication may continue.

9) Check the disk space used by your Replication Server's stable device.
admin disk_space
If the damaged partition no longer appears in this display, the file or raw partition that was used by the damaged partition may be dropped.

Restoring an RSSD Database
________________________________________________________________________

Phase I: Clean up the old system

1) Shut down all RepAgents that connect to the current Replication Server.

2) If the Replication Server is still running, log on to it and shut it down.

3) Re-create the RSSD database.
  • · If your RSSD database is irretrievably damaged, drop it. If the database is marked
suspect, you must use the dbcc dbrepair command; drop database will not execute on a suspect database.
drop database rep1_RSSD
dbcc dbrepair (rep1_RSSD, dropdb)
  • · Drop any damaged database devices using sp_dropdevice.
  • · Create new database devices using disk init (if necessary).
  • · Create the new RSSD database using create database with the for load option.
4) Restore the RSSD database.
  • · Load the database from the database dump.
load database rep1_RSSD from '<dump device>'
  • · Load the database from any transaction log dumps. Be sure to apply the
transaction log dumps in order.
load tran rep1_RSSD from '<dump device>'
  • · Activate the database using the online database command.
online database rep1_RSSD

5) If your Replication Server's RSSD had a RepAgent and you were able to recover the database up-to-the-minute, re-establish its truncation point and restart your RepAgents. Use the RSSD database and run the following command:
dbcc settrunc('ltm', 'valid')

6) If your Replication Server's RSSD had a RepAgent and there was an active route from this replication server to another one, but you were unable to recover up-to-the-minute, get the generation number.
admin get_generation, srv1, rep1_RSSD

Phase II: Rebuild the RSSD

If you have done up-to-the-minute recovery, your RSSD database should be current. Restart your Replication Server and RepAgents. Steps 7-12 are only necessary if you have not been able to recover the RSSD database to its current state.

7) Shut down the Replication Server and restart it in single-user mode. To do this, you should have a run server file that contains the -M option. Once the Replication Server comes up, check the error log to ensure that the server came up in stand-alone mode.

8) Rebuild the stable device:
rebuild queues

9) Start up the RepAgents for your primary databases in recovery mode. To do this, run the following:
sp_start_rep_agent pdb1, recovery

10) Examine your Replication Server error log, and check for loss detection messages. If a data loss was detected:
  • · Compare the primary and replicate databases, and replace any lost data.
  • · Use the ignore loss command to restart replication.
ignore loss from srv1.pdb to srv1.rdb
If no data loss was detected, Replication Server will end loss detection mode, and normal replication may continue.

11) If this Replication Server is a primary Replication Server, with a route to a downstream Replication Server, then you must clean up the truncation point in the RSSD. Remember that a primary Replication Server's RSSD is itself a primary database, replicating schema information to other RSSDs. If your Replication Server is the only one in the domain, or if it is a replicate Replication Server, you do not need to run this last step. To clean up the truncation point, do the following:
  • · Clear the RSSD’s LTM truncation point:
dbcc settrunc('ltm', 'ignore')
  • · Move the transaction log forward. Execute the following batch multiple times (40
times, for example):
begin tran
checkpoint
commit
If the number of log records does not increase, create a table in the database and drop it, and then try the above batch again.
  • · Reset the RSSD’s LTM truncation point:
dbcc settrunc('ltm', 'valid')
  • · Increment the generation id number in the RSSD:
dbcc settrunc('ltm', 'gen_id', <new number>)

12) Shut down your Replication Server and your RepAgents, and restart them in normal mode.
  • · To start your Replication Server in normal mode, you should have a run server file
that does not contain the -M option. If you accidentally bring the server up in single-user mode, there will be a message to this effect in the error log. Shut Replication Server down, and start it up using a correct run server file.
  • · To start a RepAgent in normal mode, run the following command:
sp_start_rep_agent pdb

Avoiding Disaster
________________________________________________________________________

To avoid disasters on your RSSD database:

1) Put your RSSD on database devices that are not being used by any other databases. If you can, put them on separate disk drives.

2) Separate your RSSD data and log portions onto separate database devices and, if possible, separate disk drives.

3) Mirror your RSSD database devices.

4) Keep current backups of your RSSD database.

5) Perform transaction backups of your RSSD database to permit up-to-the-minute recovery.

To avoid disasters on your stable device:

6) Mirror your stable device. Note: Sybase does not provide a mechanism to mirror your stable device; your disk drives or operating system must do this.

7) Always have more disk space in your stable device than is strictly necessary. If replication fails, the extra disk space gives you time to fix the problem before the stable device fills up.

To avoid disasters on your application databases:

8) Keep current backups of your primary database.

9) If your replicate database is significantly different from your primary database and would be difficult to rebuild, keep a current backup of it as well.

Miscellaneous suggestions

10) Document your system, so that you know the names and locations of all important objects.

11) Create rs_subcmp parameter files for every subscription you are replicating, so that you don't need to create them during a crisis.

12) On a regular basis, run rs_subcmp against your primary and replicate tables to ensure that they truly are synchronized. A sketch of what such a parameter file might look like is given below.
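A sketch of an rs_subcmp parameter file for the alpha1 subscription in the earlier example. The keyword names below (PDS, RDS, PDB, RDB and so on) and the -f flag are quoted from memory and should be checked against the rs_subcmp section of the Replication Server reference manual for your release before relying on them; the passwords are obviously placeholders.

PDS = phoebe
RDS = phoebe
PDB = sundry
RDB = sundry3
PTABLE = alpha1
RTABLE = alpha1
PSELECT = select a, b from alpha1 order by a
RSELECT = select a, b from alpha1 order by a
PUSER = sa
RUSER = sa
PPWD = <password>
RPWD = <password>
KEY = a
RECONCILE = N
VISUAL = Y

Saved as, say, alpha1.cfg, it would then be run with something like rs_subcmp -f alpha1.cfg; set RECONCILE = Y only when you actually want rs_subcmp to correct the replicate table rather than just report differences.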