To check the size of a block device in MB:
blockdev --getsize64 /dev/bdev1 | awk '{x=$1/1048576; print x; print "MB"}'
To check if a block device is in use:
fuser -vam /dev/bdev1
There are actually three tables which need to be copied over and synced up: syslogins, syssrvroles and sysloginroles.
The steps below will vary a little between versions. For example, if you are upgrading from Sybase ASE 12 to 15 you would do the following; for other ASE versions you might need to change the temp tables a bit:
Source Server
The first step is to bcp out these tables (syslogins, syssrvroles and sysloginroles) from the source Sybase server and copy the files over to the destination.
e.g. bcp master..syslogins out syslogins.out -U<Username> -S<Servername> -n -X
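The other two tables follow the same pattern (output file names here are just examples):
bcp master..syssrvroles out syssrvroles.out -U<Username> -S<Servername> -n -X
bcp master..sysloginroles out sysloginroles.out -U<Username> -S<Servername> -n -X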
From the destination server, scp the files across, e.g.
scp zkc3yit@gbzzyxsyordad02.gbcaydc.baml.com:/tmp/t/* /tmp/t
Destination Server
Syslogins
- Create the temporary syslogins table on the new server
sp_configure 'allow updates to system tables',1
go
USE tempdb
go
/*drop table dbo.temp_syslogins
go
*/
CREATE TABLE dbo.temp_syslogins
(
suid int NOT NULL,
status smallint NOT NULL,
accdate datetime NOT NULL,
totcpu int NOT NULL,
totio int NOT NULL,
spacelimit int NOT NULL,
timelimit int NOT NULL,
resultlimit int NOT NULL,
dbname sysname NULL,
name sysname NOT NULL,
password varbinary(130) NULL,
language varchar(30) NULL,
pwdate datetime NULL,
audflags int NULL,
fullname varchar(30) NULL,
srvname varchar(30) NULL,
logincount smallint NULL,
procid int NULL,
lastlogindate datetime NULL,
crdate datetime NULL,
locksuid int NULL,
lockreason int NULL,
lockdate datetime NULL,
lpid int NULL,
crsuid int NULL
)
LOCK ALLPAGES
go
You may need to tweak the column types depending on which version of Sybase you are importing from; check the source syslogins table to be sure.
- Now bcp in the logins which were bcp'd out on the old Sybase server
bcp tempdb..temp_syslogins in syslogins.out -Usa -P -SMICOY2 -n -X
- Make a note of the highest suid currently in master..syslogins
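For example, a quick way to capture this:
select max(suid) from master..syslogins
go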
- Check to see which logins already exist in master..syslogins and identify any clashes
select s.suid, t.* from tempdb..temp_syslogins t, master..syslogins s
where t.name =s.name
go
Make a careful note of anywhere there is a discrepancy in suids. If any entry in the source has a higher suid than the current highest suid in master..syslogins (e.g. if the highest current suid is 7 and ipgbmsed2 is 7 in master..syslogins but 8 in temp_syslogins) then make a note to remove any special privileges from suid 8 in temp_sysloginroles later on, if it has any. If the suid is within the current highest suids then it will be removed automatically in the delete before the insert, so just ignore it.
Also run:
select s.name, t.name, t.*
from tempdb..temp_syslogins t, master..syslogins s
where t.suid =s.suid
and t.name not in (select name from master..syslogins)
go
Here, if you have a situation where a suid in the source clashes with a different one on the destination (i.e. they have different names and the source name does not exist on the destination) then you will need to increment the suid in the temp syslogins table, and later in the temp sysloginroles table too.
- From the second check above, increase the suid of any source login which clashed with the destination, e.g.
update temp_syslogins
set suid=3006
where suid=6
go
- Delete existing logins which match by name
delete from tempdb..temp_syslogins
where name in (
select t.name from tempdb..temp_syslogins t, master..syslogins s
where t.name =s.name
)
go
- Now copy the syslogins over to the master..syslogins table
insert into master..syslogins
select * from tempdb..temp_syslogins
go
Syssrvroles
- Create the temp_syssrvroles table
USE tempdb
go
CREATE TABLE tempdb..temp_syssrvroles
(
srid int NOT NULL,
name varchar(30) NOT NULL,
password varbinary(128) NULL,
pwdate datetime NULL,
status smallint NULL,
logincount smallint NULL,
locksuid int NULL,
lockreason int NULL,
lockdate datetime NULL
)
go
- bcp in the entries from source
bcp tempdb..temp_syssrvroles in c1p16_sysroles.out -Usa -P -SMICOY2 -n -X
- Check to see if there are any clashing srids between the source and the destination; it should be OK from ASE 15.7 to 16
Select s.srid as master_srid, s.name as master_name,
t.srid as source_srid, t.name as source_name, t.*
from tempdb..temp_syssrvroles t, master..syssrvroles s
where t.srid=s.srid
go
If there are any clashes from the above (from 15.7 to 16 there should not be), then update any srids which conflict with existing srids in master..syssrvroles, e.g.
update tempdb..temp_syssrvroles
set srid= 133 where srid=33
go
- Check to see if there are clashing role names in the source to destination
Select s.srid as master_srid, s.name as master_name,
t.srid as source_srid, t.name as source_name, t.*
from tempdb..temp_syssrvroles t, master..syssrvroles s
where t.name =s.name
go
If there are any clashes from the above query then we delete the srid from the temp table; we will also need to update sysloginroles later to use the srid which already exists in master.
delete from temp_syssrvroles
where srid=34
go
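For the delete in the next step, you can find the current highest srid with, for example:
select max(srid) from master..syssrvroles
go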
- Delete the srvroles which already exist in master from the temp table.
delete from tempdb..temp_syssrvroles
where srid <= 33 -- biggest srid currently in master..syssrvroles is 33
go
- Copy the roles into master..syssrvroles
insert into master..syssrvroles
select * from tempdb..temp_syssrvroles
go
Sysloginroles
- Create the temp_sysloginroles table
USE tempdb
go
CREATE TABLE dbo.temp_sysloginroles
(
suid int NOT NULL,
srid int NOT NULL,
status smallint NOT NULL,
predid int NULL
)
go
- BCP in the entries from source
bcp tempdb..temp_sysloginroles in c1p16_sysloginroles.out -Usa -P -SMICOY2 -n -X
- Update the same suid as you did for syslogins
update tempdb..temp_sysloginroles
set suid = 3006
where suid=6
go
- Delete any suids from temp_sysloginroles which already existed at the start in master..syslogins and master..sysloginroles
delete from tempdb..temp_sysloginroles
where suid <= 8 -- highest suid recorded earlier
go
- Delete any suids from temp_sysloginroles which were previously a mismatch between master..syslogins and temp_syslogins
delete from tempdb..temp_sysloginroles
where suid in (10)
go
- Update any srid which you had to change previously in the syssrvroles step, e.g.
update tempdb..temp_sysloginroles
set srid= 133
where srid=33 -- updated srid from tempdb..temp_syssrvroles
go
- If there were any incidents of the same role name having different srids in syssrvroles (the second check) then you need to update the srid:
update temp_sysloginroles
set srid=33
where srid=34
go
- Delete any rows from temp_sysloginroles which don't have a matching suid in master..syslogins
delete from tempdb..temp_sysloginroles
where suid not in (select suid from master..syslogins)
go
- Make sure everything looks ok
select * from tempdb..temp_sysloginroles
go
- Now copy the roles into master..sysloginroles
insert into master..sysloginroles
select * from tempdb..temp_sysloginroles
go
- Disable updates to system tables
sp_configure "allow updates to system tables", 0
go
Synchronize Logins
The next steps relate to synchronizing the suids after you have loaded the old database into the new server.
- First thing is to enable updates to system catalogs
sp_configure 'allow updates to system tables',1
go
- Then, if you had to alter/increment any suids in syslogins earlier, you will also need to do this in sysusers for any database that has the old suid, e.g. 6:
update sysusers set suid=3006
where suid=6
go
- Next check whether there are any aliases set up without matching entries in syslogins:
select * from sysalternates
where suid not in (select suid from master..syslogins)
go
If you find any entries which do not match with anything in syslogins then they should be deleted from sysalternates:
delete from sysalternates
where suid not in (select suid from master..syslogins)
go
- The next check is to find situations where the same name in syslogins and sysusers has a different suid.
select l.suid 'Login suid', u.suid 'User suid', l.name 'Login name', u.name 'User name'
from
master..syslogins l, sysusers u
where l.name = u.name and l.suid != u.suid and u.name != “dbo”
go
If this finds any mismatches then the following query will resync the suids in the user database with the suids in syslogins.
update sysusers
set suid = l.suid
from sysusers u, master..syslogins l
where l.name = u.name
and u.name != "dbo"
go
- A follow-on check is to see whether there are any users who have a different name to their syslogins name but the same suid.
select l.suid,u.suid,l.name, u.name from master..syslogins l, sysusers u
where l.suid = u.suid and l.name != u.name and u.name != “dbo”
go
If that returns some rows then they need to be deleted from sysusers with the following query:
delete from sysusers
where suid in (select suid from master..syslogins)
and name not in (select name from master..syslogins)
and name != “dbo”
go
- You should also check for any users in sysusers which do not match anything in syslogins
select * from sysusers u
where u.suid not in (select suid from master..syslogins)
and name != “dbo”
and suid > 0
go
These can be removed with:
delete from sysusers
where suid not in (select suid from master..syslogins)
and name != “dbo”
and suid > 0
go
- Disable updates to system tables
sp_configure 'allow updates to system tables',0
go
To restore a failed master database, perform the following steps:
- Run dataserver with the -d and -b options to create a new master device:
dataserver -d /dev/rsd1b -b 100M
- Start SAP ASE in single-user mode:
startserver -f RUN_SYBASE -m
- You can use the master database, but presently it doesn't have any references to any user databases or sybsystemprocs. If master was expanded beyond its default size, run alter database to extend it.
- Update the backup server name in sysservers if it isn't SYB_BACKUP:
begin transaction
update sysservers set srvnetname = "PROD_BACKUP"
where srvname = "SYB_BACKUP"
commit transaction
- Load the backup of the master database; after the load completes, it will shut down SAP ASE automatically:
load database master from "/full/path/name/master.dump"
- Start SAP ASE in single-user mode, and check that all databases/devices/logins appear to be restored correctly.
- If everything appears okay, you're nearly done. Shut down and restart SAP ASE normally, and skip past the disk reinit/refit steps below.
- If everything doesn't appear okay, you have more work to do. If additional devices, databases, logins, or users of master were created after the last backup of master, you'll need to re-create them.
SAP ASE provides two commands to recover existing devices and databases to
a restored master database:
- disk reinit
- disk refit
Use disk reinit to recover devices created since the last dump of master; disk
reinit restores information to sysdevices without reinitializing the device,
which preserves the data already on the device. Values supplied to disk reinit
should match values supplied to the original disk init command.
The syntax for disk reinit follows:
disk reinit name = logical_device_name,
physname = physical_device_name,
vdevno = virtual_device_number,
size = number_of_pages | K | M | G | T
disk refit is used after running disk reinit to rebuild the sysusages and sysdatabases tables in master from the database fragments found on the restored devices.
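A sketch with hypothetical device names and values (these must match what was supplied to the original disk init):
disk reinit
name = "data_dev1",
physname = "/sybase/devices/data_dev1.dat",
vdevno = 4,
size = "2G"
go
disk refit
go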
- Assuming you needed to run it: after executing disk refit, the server will shut down automatically. You should restart SAP ASE in single-user mode and verify that all databases are properly restored and are the correct sizes. Run dbcc checkalloc on all recovered databases. If everything appears okay, shut down and restart SAP ASE normally. Any changes or additions to logins, configuration options, remote servers, remote logins, and roles will still need to be re-created.
The moral of the story is to keep frequent backups of the master database! Best
practice is to schedule a nightly backup of the master database. In addition, back
up the master database after each command that affects disks, storage, databases,
segments, logins, roles, and so on.
Since ASE 15.7 SP100 you don't need to worry about creating a database with the same sequence of data and log segments when loading a dump from another database, which is something you had to do previously.
Now the only consideration is that you have enough space on the data and log devices. For example, if you are loading from a database which was created in this sequence: Data 5MB, Log 5MB, Log 5MB, Data 15MB, Log 5MB and finally Data 10MB, then all you need to do now is:
create database dbload on datadevice = "30M" log on logdevice = "15M"
and then load the database; the load sorts the fragments into the correct order automatically.
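For example, after the create statement above, the load itself is just (the dump file name here is hypothetical):
load database dbload from "/dumps/source_db.dmp"
go
online database dbload
go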
Sybase has a relatively new backup type called cumulative, which backs up all the changes in the database since the last full backup/dump. It can be used in between full backups and tran backups to give greater flexibility, especially on large databases. It is similar to the differential backup in SQL Server.
To enable this backup type for a database you must first enable the 'allow incremental dumps' option in the database, e.g.
sp_dboption <dbname>, 'allow incremental dumps', true
go
You can then issue the cumulative backup command as long as there is an existing valid Full Dump backup:
dump database <dbname> cumulative to "/tmp/tore1-cum.dmp"
go
To load the cumulative dump the syntax is:
load database <dbname> cumulative from "/tmp/tore1-cum.dmp"
go
Backup / Restore order
In terms of using the cumulative backup you could for example take a weekly Full backup, then daily cumulative backups and hourly transaction log backups.
So in a recovery situation you would first restore the full backup, then the latest cumulative backup and finally the transaction log dumps up to the required point in time.
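A sketch of that recovery sequence (the database and dump file names are hypothetical):
load database mydb from "/dumps/mydb-full.dmp"
go
load database mydb cumulative from "/dumps/mydb-cum.dmp"
go
load tran mydb from "/dumps/mydb-tran1.dmp"
go
load tran mydb from "/dumps/mydb-tran2.dmp"
go
online database mydb
go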
Pre-Steps
The Repserver interfaces file will need entries for both Primary and Replicate Dataservers and the Primary Dataserver’s interface file will need an entry for the Repserver.
- Let's first get some things set up and make a note of the various user accounts and passwords.
Make a note of the various login passwords you need: the Dataserver sa login, the Dataserver maint login (which needs to be created on both dataservers), the Replication Server sa login and the Replication Server RepAgent login.
Unless you have an account with the same name/password on all three servers (the RepServer and two dataservers), e.g. sa, you will also need to create a RepAdmin login on the RepServer and both Dataservers, giving it sa_role and replication_role on the Dataservers and sa role on the RepServer. This login MUST be used when defining the subscription later on.
If it already exists, drop the maint user from the primary and instead alias the maint user to dbo, e.g. sp_addalias Database_maint, dbo
- Create the connections to the source (pubs2) and replicate (pubs2_rep) databases using rs_init, marking pubs2 as being replicated.
- After the connections are created you need to disable the firing of any triggers on the replicate side with:
alter connection to <replicate server>.<replicate db>
set 'dsi_keep_triggers' to 'off'
go
- Mark the whole Primary/Source database (e.g. pubs2) for replication with
sp_reptostandby pubs2, 'all'
go
You can also just mark individual table(s) for replication with:
use pubs2
go
sp_setreptable table_name, true
go
- If you are setting the whole database to replicate (sp_reptostandby) then you also need to enable warm standby transactions, which allows replication of DDL statements.
sp_config_rep_agent pubs2, 'send warm standby xacts', 'true'
go
- Make sure the repagent is up and running as it is needed for the dump marker to trigger below. This can be checked on the Primary Dataserver with sp_help_rep_agent and also on the RepServer with admin who. Another thing to check is for a valid secondary truncation point in the primary database. You can check this with dbcc gettrunc and look at the secondary trunc state; it should be 1 for valid. If it's not, stop the rep agent, run dbcc settrunc('ltm', 'valid'), then in the RSSD run rs_zeroltm <primary server>, <primary db>, and then back in the primary db start up the repagent and check the secondary truncation point again, as sketched below.
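A rough sketch of that check-and-repair sequence (server/db names are placeholders):
-- in the primary database:
dbcc gettrunc
go
-- if the secondary trunc state is not 1: stop the rep agent, then
dbcc settrunc('ltm', 'valid')
go
-- in the RSSD:
rs_zeroltm <primary server>, <primary db>
go
-- back in the primary database, restart the rep agent and re-check:
sp_start_rep_agent <primary db>
go
dbcc gettrunc
go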
- Assuming the above is fine, we now create the database replication definition using the RepAdmin login:
create database replication definition pubs2_rdf
with primary at <Primary server>.<Primary DB>
replicate ddl
go
- Define the subscription to the database replication definition using the RepAdmin login:
define subscription pubs2_sub
for database replication definition pubs2_rdf
with primary at <Primary server>.<Primary DB>
with replicate at <Replicate server>.<Replicate DB>
use dump marker
go
One very important thing to note with create or define subscription is that the login you use to run it, either interactively or through a script, must exist on the primary (and secondary) dataserver and the RepServer with the same name and password.
You can check the creation of the subscription in the RepServer errorlog; there should be a line saying "Placing marker in <Primary server>.<Primary db> log"
- Dump the Primary Database
Once the dump is complete you should check the RepServer errorlog again, as soon as the dump marker gets processed there should be an entry in the log like:
“Distributor for ‘<Primary server>.<Primary DB>’ received and processed enable marker. Distributing commands to replicates.
Distributor for ‘<Primary server>.<Primary DB>’ received and processed dump marker.
The DSI thread for database ‘<Replicate server>.<Replicate DB>’ is shutdown.”
- Now load the dump into the Replicate DB and online the database
- Now add the maint user as an alias to dbo (or grant it equivalent permissions) in the replicate database, unless it has exactly the same suid on both servers, in which case it will already be aliased to dbo; check this either way.
- Now remove the secondary truncation point from the replicate database with
dbcc settrunc('ltm', 'ignore')
- Resume the connection to the replicate database DSI
resume connection to <Replicate server>.<Replicate DB>
- That should be it. You should now have database replication set up between pubs2 and pubs2_rep, which you can test by creating a new table and entering some data in pubs2, then checking that it gets created and sent across to pubs2_rep.
BTW, to allow replication of DDL statements you might need to enable 'ddl in tran' in the replicate database. I believe this is already covered by the "alter connection to <replicate server>.<replicate db>" step, but be aware of it in case of any errors/issues.
use master
go
sp_dboption pubs2_rep, 'ddl in tran', true
go
- First you need to create the backup/log shipping directory on the primary server. Share this folder, granting full permissions to the primary dataserver's SQL Server AND SQL Agent service accounts and read permissions to the secondary dataserver's SQL Server AND SQL Agent service accounts.
- Repeat this on the destination server, giving read access to the primary server's SQL Server service account and read/write to the secondary dataserver's SQL Agent AND SQL Server service accounts.
- Next you'll need to create the secondary database. This obviously needs to be sized the same as the primary database; set it to simple recovery mode.
- You can now fire up the log shipping wizard from the primary dataserver by enabling the log shipping option under the database's properties.
NOTE: The wizard is a bit rubbish and should not be used to set up log shipping, but it is useful for generating the various stored procs which need to be run on the primary and secondary dataservers.
- The wizard is fairly self-explanatory. Select the primary dataserver log shipping directory as the backup folder (use the network share path). Also, and VERY IMPORTANT, make sure that you select "No, the secondary database is initialised" on the Initialize Secondary Database tab. Select the secondary log shipping directory (configured in step 2 above) as the "destination folder for copied files" on the Copy Files tab. The final step is to select Standby mode and "disconnect users" on the Restore Transaction Log tab.
- Click the "Script Configuration" button to output all this information to a new query window and save it; you can then close the wizard.
- First make sure the primary database is set to the Full recovery model in its properties, then back it up into the log shipping directory created earlier.
- Copy this dump across to the secondary server log shipping directory.
- Now, from the secondary server, load the previously created database from this backup. Make sure you leave this database in RESTORE WITH STANDBY mode! Place the standby file in the secondary server log shipping directory.
- On the primary server run the following commands from the script you generated earlier: sp_add_log_shipping_primary_database to add the primary database, sp_add_schedule to add a schedule for the backup job, then sp_attach_schedule and sp_update_job. A rough illustration of the first call follows below.
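As a rough illustration (all parameter values here are hypothetical; use the ones from your generated script):
EXEC master.dbo.sp_add_log_shipping_primary_database
@database = N'MyDB',
@backup_directory = N'\\primaryserver\logship',
@backup_share = N'\\primaryserver\logship',
@backup_job_name = N'LSBackup_MyDB',
@backup_retention_period = 4320, -- minutes
@backup_threshold = 60,
@history_retention_period = 5760,
@overwrite = 1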
- (Optional) On the monitor server, execute sp_add_log_shipping_alert_job to add the alert job.
- On the secondary server, execute the jobs from the script: sp_add_log_shipping_secondary_primary, then sp_add_schedule and sp_attach_schedule for the copy and restore jobs, and finally sp_add_log_shipping_secondary_database followed by the two sp_update_job procs for copy and restore.
- On the primary server, execute sp_add_log_shipping_alert_job and sp_add_log_shipping_primary_secondary to add the required information about the new secondary database to the primary server and get things up and running.
Two case studies on the implementation of PRINCE2 2009:
Suffolk County Council
British Council
These are for private use only; otherwise you need to apply for a PSI licence from the following link:
www.opsi.gov.uk/click-use/psi-licence-information/index.htm
MSSQL has many excellent tools which can be used for discovering performance bottlenecks.
sp_monitor and dbcc perfmon
To get a snapshot of what the SQL Server is doing, these commands are very useful.
Run sp_monitor twice with a gap of 30 seconds in between to get a high-level overview of how busy the SQL Server is.
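For instance, from a query window (a minimal sketch, using waitfor to space out the two snapshots):
exec sp_monitor
waitfor delay '00:00:30' -- wait 30 seconds; the second run reports activity since the first
exec sp_monitor
go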
dbcc perfmon will give you much more detailed information on the state of the SQL Server, but the command has been superseded by the various SQL Server counters which can be used in System Monitor.
SSMS Reports
SQL Server has a set of excellent reports which are generated from the Dynamic Management Views, at both the server and database level. If you right-click on the server or an individual database in SQL Server Management Studio you can select which report to run. You can even export the report output to a PDF. This will highlight any problems which you can then focus on.
System monitor
To drill down further, the first port of call should be System Monitor, or perfmon, which is started by typing perfmon into the Run box.
One excellent feature is that you can record a counter log at the same time as running a SQL Profiler session (discussed later); you can then merge the two, which lets you see how bad queries affect the CPU, memory etc., or vice versa.
The objects to normally check for in system monitor are:
Memory, Network Interface, Physical Disk, Processor, SQLServer:Buffermanager, SQLServer:Databases, SQLServer:Exec Statistics, SQLServer:General Statistics, SQLServer:Latches, SQLServer:Locks, SQLServer:Memory Manager, SQLServer:PlanCache, SQLServer:SQL Statistics and System.
SQL Profiler
System Monitor might highlight the bad performance, but you will typically need to drill down a bit further to find the bad queries which are causing the problems. To do this you use SQL Profiler.
The first step is to establish a performance baseline as follows:
- Start Profiler, select File, New Trace and connect to the relevant instance.
- Specify a trace name, a template (Standard) and a location to save the trace file to. Set the maximum file size to 50MB and enable file rollover.
- In the Events Selection tab untick the Audit Login, Audit Logout, ExistingConnection and SQL:BatchStarting check boxes.
- Click on Show All Columns and select the TextData, NTUserName, LoginName, CPU, Reads, Writes, Duration, SPID, StartTime, EndTime, BinaryData, DatabaseName, ServerName and ObjectName columns.
- Click on Column Filters and set the DatabaseName filter to the database you are interested in.
- Click Run to start the trace and simultaneously start the System Monitor counter log.
Correlating the counter log with the Profiler trace
- Start Profiler, select File, Open and select the trace file you created for the performance baseline previously.
- Next go to File, select Import Performance Data and select the counter log file you created simultaneously with the performance baseline trace.
- In the Performance Counters Limit dialog window select Network Interface: Output Queue Length, Processor: % Processor Time, System: Processor Queue Length and SQLServer:Buffer Manager: Page life expectancy. Click OK.
- You can then scroll through the trace and find what was happening at peak activity on the Windows machine, and also what effect certain commands had on overall server performance.