Sybase – Backup Auditing Query

-- Please run the query and redirect output to

select 'DUMPDB_BACKUP_STATUS' as DUMPDB_BACKUP_STATUS,',',@@servername AS SERVERNAME,',',b.dbName,',',b.processName,',',b.startTime,',',b.endTime,',',b.duration,',',b.size,',',b.status
from dba.dbo.backup_processes b, master.dbo.sysdatabases d
where d.name = b.dbName
and b.processName = 'DUMPDB'
group by b.processName,b.dbName
having b.startTime between dateadd(day,-15,convert(date, max(b.startTime), 23)) and dateadd(day,-1,convert(date, max(b.startTime), 23))
order by b.processName,b.dbName,b.startTime

select 'DUMPDB_BACKUP_STATUS' as DUMPDB_BACKUP_STATUS,',',@@servername AS SERVERNAME,',',name,',','NO_DUMPDB',',',null,',',null,',',null,',',null,',',null
from master.dbo.sysdatabases
where name not in (select distinct dbName from dba.dbo.backup_processes where processName = 'DUMPDB')

select 'ADSMINC_BACKUP_STATUS' as ADSMINC_BACKUP_STATUS,',',@@servername AS SERVERNAME,',',b.dbName,',',b.processName,',',b.startTime,',',b.endTime,',',b.duration,',',b.size,',',b.status
from dba.dbo.backup_processes b, master.dbo.sysdatabases d
where d.name = b.dbName
and b.processName = 'ADSMINC'
group by b.processName,b.dbName
having b.startTime between dateadd(day,-15,convert(date, max(b.startTime), 23)) and dateadd(day,-1,convert(date, max(b.startTime), 23))
order by b.processName,b.dbName,b.startTime

select 'ADSMINC_BACKUP_STATUS' as ADSMINC_BACKUP_STATUS,',',@@servername AS SERVERNAME,',',name,',','NO_ADSMINC',',',null,',',null,',',null,',',null,',',null
from master.dbo.sysdatabases
where name not in (select distinct dbName from dba.dbo.backup_processes where processName = 'ADSMINC')

Sybase – Generate Revoke Scripts

-- Revoke script
set nocount on
go
set proc_return_status off
go
select 'use ' + CHAR(10) + "GO"
go
select 'revoke insert on ' + name + ' from ' + CHAR(10) + "GO" from sysobjects where type in ('U', 'V')
go
select 'revoke update on ' + name + ' from ' + CHAR(10) + "GO" from sysobjects where type in ('U', 'V')
go
select 'revoke delete on ' + name + ' from ' + CHAR(10) + "GO" from sysobjects where type in ('U', 'V')
go
select 'revoke execute on ' + name + ' from ' + CHAR(10) + "GO" from sysobjects where type in ('P')
go
select 'revoke Delete Statistics on ' + name + ' from ' + CHAR(10) + "GO" from sysobjects where type in ('U', 'V')
go
select 'revoke Transfer Table on ' + name + ' from ' + CHAR(10) + "GO" from sysobjects where type in ('U', 'V')
go
select 'revoke Truncate Table on ' + name + ' from ' + CHAR(10) + "GO" from sysobjects where type in ('U', 'V')
go
select 'revoke Update Statistics on ' + name + ' from ' + CHAR(10) + "GO" from sysobjects where type in ('U', 'V')
go

${SYBASE}/${SYBASE_OCS}/bin/isql -b -D dbname -S IDKTPAP1 -U dba_oper -i /tmp/t/revoke_script.sql -o revoke_script_dbname.sql

-P *

-- Databases – RO Groups
KA_Indo – , ReadOnly
KGlobal – , ReadOnly
KK_Indo – , ReadOnly
KV_Indo – , ReadOnly
K_Indo – , ReadOnly

dbktp_feedloader – , kplustp_ro_group
interf – , ReadOnly
ktparch – , kplustp_ro_group
ktpplus – , kplustp_ro_group

-- MDA – is this needed? dba_ro_group (gudanav_ro, mhapnih_ro, miroand_ro, stoimar_ro) doesn't seem to have any permissions; what is the point of it?

set nocount on
go
set proc_return_status off
go
select 'revoke insert on ' + name + ' from dba_ro_group' from sysobjects where type in ('U', 'V')
go
select 'revoke update on ' + name + ' from dba_ro_group' from sysobjects where type in ('U', 'V')
go
select 'revoke delete on ' + name + ' from dba_ro_group' from sysobjects where type in ('U', 'V')
go

ddlgen – Extraction

Use ddlgen to extract object DDL from a database, e.g. tables, views, etc.

An example to extract the DDL for all tables is:

ddlgen -Udba_oper -STKDSMOFPRD1 -Dmoftp -TU -N% -O/tmp/t/ddlgen_moftp_tables
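The -T flag selects the object type. A couple of hedged variants for stored procedures and views, reusing the same (assumed) server, login and output paths as the example above:

```shell
# -TP extracts stored procedures, -TV views; -N% matches all object names.
ddlgen -Udba_oper -STKDSMOFPRD1 -Dmoftp -TP -N% -O/tmp/t/ddlgen_moftp_procs
ddlgen -Udba_oper -STKDSMOFPRD1 -Dmoftp -TV -N% -O/tmp/t/ddlgen_moftp_views
```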

Sybase Interview Questions

ASE Dataserver:

Q) What are the new features of ASE15?

A) ASE 15 has a brand new optimizer which gives better performance and is self-tuning. It also introduced smart table/index partitions and on-disk encryption. It has Advanced System Metrics (Query Metrics), auto-updating statistics and advanced XML technologies. In addition, there is now row-level locking on most system catalogs. Showplan has been enhanced to make the various steps easier to read. ASE 15 requires more memory to run because of new in-memory sorting and grouping algorithms, and also more disk space because of some new stored procs and changes in the structures of some of the system catalogs.

ASE 15.5 introduced in-memory databases and relaxed-durability databases, and also improvements in backup compression.


Q) What are the major features introduced in ASE 15.7?

A) The major new feature is the threaded kernel mode. It also introduced improvements to compression, specifically in-database compression, the ability to shrink a transaction log, and the ability to create non-materialised non-null columns, i.e. you no longer need to rewrite the entire table, which used to require lots of spare space in the database. There are also new security and diagnostic features, plus performance enhancements with regard to parallelism.


Q) New features in ASE16?

A) Partition locking, Full Database Encryption, Index Compression, Configuration History Tracking, Query Plan Optimisation with Star Joins, Dynamic Thread Assignment, and Lock Management Enhancements.


Q) New features in ASE16 SP02

A) Compiled Queries, Transactional Memory, Lockless Data Cache, Latch-free B-tree, and Non-Volatile Cache Management (using SSDs), all of which bring latency and throughput benefits.


Q) New features in ASE16 SP03

A) In-memory row store (IMRS) with data row caching (DRC), new indexing schemes such as Hash Cached B-tree (HCB) indexing, and multi-version concurrency control (MVCC). All of these new features potentially lead to 40x better performance compared to older versions of ASE. There is also a new tool, Workload Profiler, which enables ASE to automatically recommend configuration changes to move from a 100% page-based store to having hot rows in memory, leading to a significant improvement in performance.


Q) What are the main advantages and disadvantages of using identity columns?

A) The advantages are in the ordering of the table and the creation of unique rows; the disadvantage is identity gaps.


Q) Can you update an identity column?

A) No – you can only define the starting number, and insert.


Q) How to bcp out a subset of rows from a large table?

A) Create a view with the subset and bcp out via it.


Q) What does set forceplan on do?

A) Forces the join order for queries to be the order in which the tables appear in the from clause.


Q) How do you find out which process has filled up tempdb?

A) select * from master..syslogshold, but you will need to create an additional temporary database and bind it to your login first, if you don't already have one.


Q) What would you do to improve the performance of tempdb following an install?

A) tempdb is initially installed as 2MB on the master device, so you should extend tempdb onto a new device, or devices, and then update the segmap for the tempdb fragment on the master device to 0.
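A minimal sketch of that procedure; the device name, path and sizes are hypothetical:

```sql
disk init name = 'tempdb_dev01',
    physname = '/sybdata/tempdb_dev01.dat', size = '2G'
go
alter database tempdb on tempdb_dev01 = '2G'
go
use tempdb
go
-- Drop the default and log segments from the master-device fragment
-- so new objects and log activity land on the new device instead.
sp_dropsegment 'default', tempdb, master
go
sp_dropsegment 'logsegment', tempdb, master
go
```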


Q) What are the options for doing an upgrade to a Sybase server?

A) You can do an in-place Sybase upgrade, but it's safer to create a new server and migrate the databases/logins etc. across.


Q) What is the difference between an MRU and an LRU caching strategy?

A) They stand for most recently used and least recently used, and define which pages are swapped out first. The default is LRU, i.e. the pages least recently used get flushed out when more space is needed. MRU is referred to as fetch-and-discard and is ideal for queries where a page is only needed once, for example table scans on heap tables, and also situations where you have more than one table in a join query.


Q) What does the term covered query mean?

A) If the columns specified in a query (columns to be returned plus columns in where clauses) are completely satisfied by an index, that index is called a covering index. It means the query engine only needs to read the index and never has to visit the data pages, which leads to a much quicker query.


Q) In Adaptive Server, explain the difference between a character (raw) and a block (filesystem) device, and when each is typically used or could be used.

A) A character or un-cooked/raw device writes directly to the disk and guarantees transactions are written to the disk immediately. A block or cooked device does not write to the disk immediately, which can cause the database to lose its integrity should there be a media failure. Character devices are normally used for live databases. Using block devices in Tempdb can in some circumstances improve performance for applications that create a lot of temporary tables.

This is becoming less and less relevant, as modern operating systems allow the use of direct I/O to a filesystem device, which is almost the same as using a raw device.


Q) Briefly explain what the difference is between a transient and hard 605 error.

A) A transient error:

The operating system notifies Sybase Server that an I/O operation is complete when it is not, or any time the buffer information about a page differs from the disk information about the page (and the disk has the correct information). Normally a dbcc checkdb and dbcc checkalloc should then produce no more 605 errors.

A hard error:

Normally hardware related; it can be caused by incorrectly formatted disks (overlapping partitions) or disk media failure.


Q) When you try to start the server you get a 'cannot allocate shared memory' error – why, and what do you do?

A) Probably some shared memory from a previous instance is still allocated and needs to be released (e.g. identify the segments with ipcs and remove them with ipcrm). The kernel limit can be checked via shmmax in /etc/system.


Q) What are the Sybase isolation levels?

A) Sybase ASE has four isolation levels: 0 (read uncommitted, i.e. dirty reads), 1 (read committed, the ASE default), 2 (repeatable read) and 3 (serializable/holdlock, the ANSI default). The higher the level, the greater the effect on locking.
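For example (the table and column names are illustrative):

```sql
-- Session-wide: run at serializable (level 3).
set transaction isolation level 3

-- Per-query equivalent using the holdlock keyword:
select balance from accounts holdlock where acct_id = 42

-- Check the current session level:
select @@isolation
```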


Q) What type of indexes are available in Sybase?

A) Clustered / non-clustered


Q) How do you display index size?

A) sp_spaceused table_name, 1 (the second argument reports the index sizes).


Q) What type of index is good for range queries with joins?

A) non-clustered


Q) How much space is required for a clustered index build?

A) Roughly 120% of the table size, since the data is copied and sorted during the build.


Q) What is index covering?

A) Where all columns requested are present in the index, therefore the data pages do not need to be accessed.


Q) How would you defragment a heap table?

A) You could either create a clustered index then drop the index afterwards or bcp all the data out, delete the data from the table and then bcp it back in again.
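A sketch of the first approach; the table and column names are hypothetical:

```sql
-- Building a clustered index rewrites the heap in key order,
-- compacting the pages; dropping it leaves the defragmented heap.
create clustered index defrag_idx on big_heap_table (id)
go
drop index big_heap_table.defrag_idx
go
```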


Q) Explain the difference between slow and fast BCP and how to apply this.

A) Slow bcp leaves indexes in place and is the default. Fast bcp requires dropping the indexes/triggers on the table in question; the indexes are recreated after the data has been loaded. The bcp -A flag can be used to increase the default network packet size, which can increase the speed of a bcp; however, the underlying UNIX OS needs to be configured to support the larger packet size, and you would also need to allocate additional network memory in the Sybase dataserver for the bigger network packets.


Q) If you bcp’d into a table with rules, defaults and triggers would any of these take effect?

A) Only defaults if fields are null.


Q) What are considerations for very large databases?

A) Sybase ASE 15 vastly improved on the previous limit of 256 devices of 32GB each; you can now have billions of 4TB devices, i.e. effectively unlimited storage. The other considerations for very large databases are the timings of backups and maintenance tasks; compression and partitioning become much more relevant.


Q) Which databases are created on the master device by default?

A) Master, model, tempdb, sybsystemdb, (sybsyntax).


Q) How would you recover from a corrupt master database?

A) You first need to recreate the master database at the same location as before with the same size, this is done with

dataserver -d <device path and name> -w master -z <page size> -b <size in 2k blocks>

Next you can log into the new master database with the default sa login and configure the backup server.

After this it’s a simple matter of loading in the master database from backup.


Q) How to drop corrupted database?

A) dbcc dbrepair(dbname, dropdb)


Q) How to analyse a badly performing query?

A) First you need to isolate the query by looking in sp_who, together with sp_lock, and also at any long-running transactions in master..syslogshold. You then need to extract the actual SQL being run using dbcc sqltext(spid), and view the plan with sp_showplan.

You can then run a trace against the spid using set tracefile '<path to tracefile>' for <spid>. You can also then set showplan on and set statistics io on, whose output also goes to the tracefile.

It's also a good idea to refer to the relevant MDA tables and query metrics. When you have isolated the bad SQL text you can start to optimise it, either by rewriting it or by putting in extra indexes etc.

Another cause of sudden bad performance could be fragmentation of the table, especially if a large amount of data has suddenly been added. This can be fixed by dropping and recreating the clustered index, or by running a reorg rebuild command on a DOL or APL table.
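Put together, a hedged investigation session for a hypothetical spid 123 might look like:

```sql
dbcc traceon(3604)            -- send dbcc output to the client session
go
dbcc sqltext(123)             -- the SQL the spid is currently running
go
sp_showplan 123, null, null, null
go
set tracefile '/tmp/spid123.trc' for 123
go
set showplan on
set statistics io on
go
```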


Q) What variables would you always be adjusting for a production Sybase server?

A) You would need to increase the memory for the server, together with procedure and data cache. Also increase the number of users, number of open objects and indexes. Possibly increase the number of online engines and enable parallel processing.


Q) How would you tell if a users transaction is rolling back?

A) dbcc traceon(3604) and dbcc stacktrace(spid), dbcc pss(0, spid), and check that the io column in sysprocesses is incrementing.


Q) Name some ways you can connect to Sybase using perl?

A) DBD::Sybase (via DBI) is probably the best way.


Q) In transact sql explain the purpose of the having clause and when it is typically used?

A) You use it with a group by; it acts like a where clause for the grouped results, and unlike where it can reference aggregates.
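A short illustration, reusing the backup_processes table from the auditing query earlier in these notes:

```sql
-- where filters rows before grouping; having filters the groups
-- themselves and may reference aggregates such as count(*).
select dbName, count(*) as runs
from dba.dbo.backup_processes
where processName = 'DUMPDB'
group by dbName
having count(*) > 10
```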


Q) An application frequently suffers from deadlocks, suggest a simple technique for the developers to reduce the deadlocks significantly?

A) Change the relevant tables to row-level locking, and also check whether the stored proc accesses tables in the opposite order to another similar stored procedure.


Q) There is a high level of blocking occurring on a Sybase server in production, list some of the likely causes and suggest ways of addressing the problem?

A) The main cause is probably a badly written query which is keeping locks open too long, but it could also be a very busy server with lots of processes accessing the same table at the same time. Changing the tables to row-level locking is a quick win; another solution is to optimise the stored procs/queries and perhaps create some new indexes to help with the load.


Q) What is table partitioning and outline some situations where it can be used?

A) Table partitioning is a way of dividing a table into chunks. It can be very useful in various situations: one is with a heap table, where a partitioned table ends up having multiple last pages and therefore cuts down on contention. Another benefit is reducing disk contention by putting partitions on separate disks.


Q) An application is stack tracing on selects with page pointer errors in a busy production environment; the table is suspected to be the cause. Outline some steps you would take to resolve this situation.

A) Run consistency checks on the table (dbcc tablealloc and dbcc checktable), plus liaise with the OS and storage people to check that the underlying disks are OK.


Q) Which dbcc(s) would you run on a regular basis?

A) checkalloc, checkdb, checkcatalog, checkstorage.


Q) What is the difference between checkalloc and checkdb?

A) Checkalloc checks page allocations (all allocated pages are used and all used pages are allocated); checkdb checks page chains, index sort order and index pointers.


Q) Name some of the Optimization goals in ASE?

A) allrows_mix (the default), allrows_oltp and allrows_dss.


Q) How can SQLIO be used to improve performance?

A) SQLIO is a tool used to check the performance of the SAN and can be used to improve the disk system which Sybase runs on.


Q) What command/utility can be used to check the state of the stats in a table, new in 15.x

A) You can view statistics and histograms with sp_showoptstats. You can also set the option show_missing_stats to on; the optimiser will then print information on columns with no stats.


Q) How would you migrate a database from one page size to another, e.g from 2k page size to 8k?

A) You can do this in two ways: either using the sybmigrate utility, or by running ddlgen to extract all the objects, recreating them in a new database and bcp'ing in the data.


Q) What is procedure and data cache – how are they used?

A) Procedure cache holds query plans for stored procedures; data cache holds cached index and data pages.


Q) What is statement cache and how would you configure it on a Sybase server?

A) Statement cache is set at the server level and caches query plans for ad hoc statements, to save query compilation time. It is set with sp_configure 'statement cache size', size_in_2k_pages.
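For example (the size is illustrative):

```sql
sp_configure 'statement cache size', 10000  -- 10000 * 2K pages = ~20MB
go
set statement_cache on   -- the per-session on/off toggle
```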


Q) What would be the cause of the error message, “Could not allocate memory using Huge Pages”

A) It is caused by not having enough huge pages available on the OS; you can check how many are available with cat /proc/meminfo.


Q) What are some of the other server options would you change after a server build?

A) Number of users, memory, number of locks, number of remote connections, number of devices, number of open databases.


Q) What are the four types of page locks?

A) (Logical) Shared – the transaction reads the page and does not want the rows to change; the holder has no intention of modifying. No other transaction can modify the page.
(Logical) Update – the transaction has read the page and may modify it later. Update locks are promoted to exclusive locks when the transaction actually modifies the page. No other transaction can modify the page.
(Logical) Exclusive – the transaction wants to modify the page. No other transaction can read or modify it.
(Address) Lock on non-leaf index pages (logical locks are used on leaf nonclustered index pages and data pages).


Q) What is lock escalation?

A) When a transaction holds 200+ page locks on a table, it will attempt to escalate to a table lock.


Q) Why would a transaction not escalate to table lock?

A) Another process already holds a page lock on the table. Also, multiple re-reads of the same table can make it look as if a transaction holds more than 200 locks, when each individual select may only hold a few index pages at a time.


Q) What is an engine – how many would you configure – what command would you use to change this number?

A) A Unix process, usually affiliated with a CPU. You must have no more engines than CPUs; usually either CPUs minus 1, or equal to the CPU count if there are a large number (assuming a single-dataserver machine). sp_configure 'max online engines' and 'number of engines at startup'.


Q) How do you create a server.cfg file – if yours had been deleted?

A) sp_configure 'configuration file', 0, 'restore', '', or just change one of the config options, which will generate a new file.


Q) What is fill factor – why is it important, what would be the result of using 0,1 and 100?

A) Fill factor determines how full each page is at the initial index build. 0 (the default): two free rows are left on each index page; 100: all index pages are 100% full; 1–99: varying degrees of fill.


Q) What is page splitting when does it happen?

A) You get a page split when a page is 100% full or max_rows_per_page is reached. This effectively splits one page across two (50:50); the exception is where it is the last page, or there is a 100% fill factor.


Q) What is the sybmultbuf process, when would you see them – and how many?

A) Spawned by the backup server when dumping a db – one for each dump or dump stripe.


Q) How are database fragments in sysusages linked to devices in sysdevices?

A) The vstart in sysusages falls between low and high in sysdevices, but really just do sp_helpdb and sp_helpdevice, much easier.


Q) If you have the ‘trunc log on checkpoint’ option set when will the log get truncated?

A) A manual checkpoint flushes dirty pages to disk; a system checkpoint will also truncate the log if the dboption is set.


Q) If you were asked to take a dump from the production environment and load it into development what steps would you take – or problems would you look out for?

A) You would dump the database, copy it across and load it. In earlier versions you would need to ensure the segmaps aligned between the databases, but this is no longer an issue. Mismatched user IDs and aliases could be a concern, as are the sizes of the databases.


Q) What non-logged operation still allows you to dump and load a transaction log?

A) truncate table


Q) What should your housekeeping scripts do?

A) dbcc, dump, update stats, drop rebuild/reorg indexes.


Q) What does update stats do – why would you run it, how would you check when it had last been run?

A) It updates the statistics held about tables and indexes – in particular the distribution, steps and density. You run it to help the optimiser make better decisions, and you can check when it last ran in the sysstatistics table.
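A hedged example; the table name is hypothetical, and the sketch assumes sysstatistics.moddate records when each statistics row was last modified:

```sql
update index statistics big_table
go
-- When were statistics last updated per object in this database?
select object_name(id) as tab, max(moddate) as last_updated
from sysstatistics
group by id
```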


Q) Why would you rebuild indexes?

A) To defragment – save space and increase performance


Q) How would you know how fragmented a table / index is?

A) Run sp_spaceused and compare with sp_estspace


Q) How would you shrink a database?

A) Run alter database dbname off old_dev


Q) How does the server optimise queries with more than four tables?

A) Tables are costed in groups of four, i.e. abcd, bcde; the best outer table is removed, and the normal process is repeated. This can be influenced by ordering tables biggest to smallest, or by changing set table count x (maximum value 8).


Q) What is meant by reformatting?

A) Where the plan is very costly, i.e. no usable indexes etc., the optimiser may decide to reformat: create a work table, populate it with the relevant columns, create a clustered index on it, and use that index.


Q) What is meant by the ‘or strategy’?

A) The OR strategy is used only when all columns involved are indexed and the query could return duplicate rows. A special work table (shown as a dynamic index in showplan) is created to get distinct values only, using row IDs.


Q) When are work tables created automatically?

A) Order by, sorts, group by, distinct, reformatting, the OR strategy, and other worktable operations.


Q) What is a direct update and how is it different to a deferred?

A) SAP ASE performs direct updates in a single pass, by locating the affected index and data rows, writing records for the changes to the transaction log, and making the changes to the affected data and index pages. There are in-place updates, cheap direct updates and expensive direct updates, all cheaper than deferred updates.


Q) What is required for an In-place update (best)?

A) The row cannot change its length, cannot be the key of a clustered index, no joins, not used for referential integrity, no trigger on the column, no replication.


Q) What causes deferred updates?

A) Updates that use joins, updates to columns used for referential integrity, and updates to index keys.


Q) Why run sp_recompile on a stored proc?

A) To parse and normalise the sp with new statistics.


Q) What causes a stored procedure to recompile?

A) 1. The table no longer exists with that objid or dbid; 2. an index it used has been dropped; 3. any new columns/rules/defaults (sysobjects.schema+1); 4. the db does not have the same crdate; 5. the db has been reloaded.


Q) Does recompile compile both tree and plan?

A) Just the plan


Q) Which traceflags are good to use for tuning queries?

A) 302 (print trace information on index selection), 310 (show the plan chosen by the optimizer, as well as I/O costs), 311 (display the optimizer's expected I/O cost).


Other questions

Q) If a user db is in log suspend mode, what do you do?
Q) What if there is a secondary truncation point and that is the oldest tran?




Replication server:

Q) What is the save interval used for, pertaining to the stable queue?

A) It determines how long the DSI will wait before deleting information from the stable queue that has already been applied to the replicate db. It is specified on the connection via the save_interval parameter (in minutes). It can be used for redundancy in the event of a replication system failure: the save interval lets the sending Replication Server re-send all messages saved during that window. You can use sysadmin restore_dsi_saved_segments to restore backlogged transactions.


Q) What Replication Server parameters should always be increased from the default values.

A) The two most important are memory_limit, which sets the total memory available to the Replication Server, and sqt_max_cache_size, which sets the limit of cache available to the stable queue. It's also a good idea to cache the system tables, and to consider parallel DSI threads. You would also look to adjust the rep agent settings, specifically send_buffer_size and scan_batch_size.


Q) When a DSI thread suspends due to a transactional error, what are the two methods used to force resumption of the queue?

A) resume connection to dataserver.database skip transaction

set autocorrection on for replication_definition with replicate at data_server.database


Q) What Replication Server command shows all the configuration parameters pertaining to a Replication Server.

A) rs_configure – run in RSSD for Replication Server, i.e. from the ASE holding the RSSD database.


Q) What is the ID Server in a replication environment?

A) It is the master replication server which holds information on all the other replication servers.


Q) What happens if the ID Server goes down in a multi repserver environment.

A) It runs OK until you try to update any rep defs etc. or restart any of the other replication servers.


Q) How would you find what command caused a dsi to go into suspend?

A) Check the rep server error log.


Q) If the error log indicates that dsi is down due to duplicate key error what do you do?

A) First do a sysadmin log_first_tran to output the bad transaction; it can then be viewed with rs_helpexception. If you decide it can be skipped, then resume the connection with skip transaction.
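The sequence might look like this (the replicate server and database names are placeholders):

```sql
-- In the Replication Server: log the failing transaction instead of
-- retrying it, then inspect it from the RSSD with rs_helpexception.
sysadmin log_first_tran, RDS, rdb
go
resume connection to RDS.rdb skip transaction
go
```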


Q) What is autocorrection?

A) It is an option set on a rep def which converts updates and inserts into a delete followed by an insert. This is useful in dev environments, where you sometimes get a lot of duplicate transactions.


Q) How do you determine the time of the last transaction applied to the replicate database?

A) select * from rs_lastcommit (executed in replicate database)


Q) What is the easiest way to completely replicate a database from London to Singapore and New York?

A) Use MSA, database replication definitions.


Q) When was MSA (database replication definition) introduced?

A) Version 12.6.


Q) What is replication monitor?

A) Replication Monitoring Services was introduced in version 15 and replaced the RSM server; it consists of the UAF and the Replication Manager plugin for Sybase Central.


Q) What are the steps to refresh, from a production database dump, a primary development database which is being replicated to a secondary development database?

A) The steps are as follows:

– Get the details of the db configuration – get the sp_config_rep_agent details.
– Get the gen_id – run dbcc gettrunc.
– Stop the rep agent with sp_stop_rep_agent.
– Load the database from the dump.
– Configure the rep agent back as per the details in step 1.
– dbcc settrunc('ltm', 'ignore')
– Dump tran database with truncate_only.

– (Optional) Dump and load this db into the replicate database(s) if you need an exact copy of the data.
– dbcc settrunc('ltm', 'valid')
– dbcc settrunc('ltm', 'gen_id', gen_id+1 from step 2)
– On the RSSD run rs_zero_ltm data_server, database
– sp_start_rep_agent database
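The steps above can be sketched as one sequence; the database name pdb and the recorded gen_id of 5 are placeholders:

```sql
sp_config_rep_agent pdb            -- record current settings
go
dbcc gettrunc                      -- note the gen_id (say, 5)
go
sp_stop_rep_agent pdb
go
-- load database pdb from '<dump file>' ... then:
dbcc settrunc('ltm', 'ignore')
go
dump tran pdb with truncate_only
go
dbcc settrunc('ltm', 'valid')
go
dbcc settrunc('ltm', 'gen_id', 6)  -- recorded gen_id + 1
go
-- in the RSSD: rs_zero_ltm data_server, pdb
sp_start_rep_agent pdb
go
```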


Q) Does MSA replication work over a WAN?

A) Yes, I think so – I can't see why not; warm standby works over a WAN.


Q) Transatlantic replication New York to London one-way, what are the steps to troubleshoot it if the data is not being replicated across?

A) The first step is to check the replication server error log, followed by looking at the replication queues with admin who, sqm. If you are replicating from one replication server to the next, you obviously need to check both sides to see where the issue is arising. You also need to check the error log of the replicate database to make sure the database is up and running, and check that the repagent is up and running on the primary. One quick test is to see if you can connect to the replicate database/repserver from the primary, to confirm communication is up, though you would see this from the error logs as well. It could well be that the cause is a duplicate transaction, so investigate that and, if necessary, resume the connection to the replicate database with skip transaction.


Q) What is the command to see how far the repagent has read through the log on the primary?

A) dbcc gettrunc, and look at the secondary truncation page. You could also look at sp_help_rep_agent, and check the queues in the replication server with admin who, sqm to see if any are lagging behind.


Q) What is the benefit of having 2 replication servers when replicating across 2 countries?

A) It is more efficient, as the two replication servers talk in a more concise protocol than a dataserver and replication server do; for example, they strip out empty begin/commit transactions. It also gives better resilience against temporary network outages and cuts down on network contention.


Q) What command would you use to check which connections are up/down?

A) admin who / admin who is down


Q) What are the stages to creating a subscription?

A) define, activate, validate, check


Q) Where would you find the valid logins for the rep server?

A) rssd..rs_users


Q) What must a table have to be replicated using warm standby?

A) primary key

Replication – Set up MSA Rep

The first thing to do is to add the dataserver connection settings into the Replication server interfaces file.
You also need to add the replication interfaces file entry into the primary dataserver interfaces file

You then need to make sure the maint users are created in the primary and replicant servers, add the password details into the .sybinfo file

Next grab the passwords for sybadmin for both servers and also the rs SA and Repagent logins

Make sure that there is no repagent/secondary truncation point configured on the primary; if there is, remove them with sp_config_rep_agent sierra, 'disable' and dbcc settrunc(ltm, ignore)

You should now be ready to run rs_init for both the primary and replicant

After the connections are created you need to disable the firing of any triggers on the replicant side with:
alter connection to .
set 'dsi_keep_triggers' to 'off'

Reptostandby does not need to be executed. We only replicate at the table level and not the db level. So DDL changes are not replicated.

So set any individual tables for replication in the primary database with:
sp_setreptable fnxaudittnum, true
sp_setreptable fnxauditdet, true
sp_setreptable fnxauditgrpsub, true
sp_setreptable fnxauditgrp, true
sp_setreptable fnxaudtype, true
sp_setreptable fnxaudit, true

If it is not already running, start the RepAgent on the primary, ensure it is up and running (sp_help_rep_agent), and also check in the Repserver.

The repdef, subscription and any function string class scripts etc are stored in directory /sybdba/sybase/dba/replication so create them first, perhaps copy from somewhere else.

Once they are ready, first execute the database repdef and function-string class scripts against the Repserver.

Next create the subscription with dump_marker from the script above

Now we need to dump and load the database

Dumps seem to be located in directories under, for example, /datlib/<dump_area>/<subdir>/

so just create a new subdirectory, like sierra_DS16-19, and on the dataserver issue the dump with the following:
dump database sierra to '/datlib/cam_sybdmp_uat01/<subdir>/<dbname>.dmp.s1'
stripe on '/datlib/cam_sybdmp_uat01/<subdir>/<dbname>.dmp.s2'
stripe on '/datlib/cam_sybdmp_uat01/<subdir>/<dbname>.dmp.s3'
with compression=100
go
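On the replicate dataserver the matching load is along these lines (a sketch; the stripe count must match the dump, and the paths/subdirectory are placeholders):

load database sierra from '/datlib/cam_sybdmp_uat01/<subdir>/<dbname>.dmp.s1'
stripe on '/datlib/cam_sybdmp_uat01/<subdir>/<dbname>.dmp.s2'
stripe on '/datlib/cam_sybdmp_uat01/<subdir>/<dbname>.dmp.s3'
go
online database sierra
go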

Replication – Remove MSA/RepAgent

Remove MSA Database

1) Drop Subscription
2) Drop Rep Def
3) Stop Rep agent
4) Disable Rep Agent: sp_config_rep_agent <dbname>, 'disable'
5) Check that the secondary truncation point is removed
6) Alter connection to <dataserver>.<database> set log transfer off
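As a sketch, with server and database names as placeholders, those steps translate to:

-- In the Repserver:
drop subscription <sub_name> for database replication definition <db_repdef> with replicate at <ds>.<db> with purge
go
drop database replication definition <db_repdef> with primary at <ds>.<db>
go
-- In the primary database:
sp_stop_rep_agent <dbname>
go
sp_config_rep_agent <dbname>, 'disable'
go
dbcc settrunc(ltm, ignore) -- clears the secondary truncation point
go
-- Back in the Repserver:
alter connection to <ds>.<db> set log transfer off
go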


Remove a RepAgent

- First drop any subscriptions, and then the repdefs, associated with the primary database where you want to drop the RepAgent.
You can check these with rs_helpsub and rs_helprep from the RSSD.

- Drop any subscriptions and repdefs with e.g. drop replication definition for a table repdef, or the equivalent commands for database repdefs.

- Next disable the RepAgent on the primary DB with sp_config_rep_agent sierra, "disable"

- Finally, on the RepServer, disable replication/log transfer for the primary DB with alter connection to SYBEMA_SIEPCE_UAT_DS02.sierra set log transfer off

Replication check repdef/subscription info

Run these in the relevant RSSD database for the Repserver:
admin config, id_server – To find the Rep ID Server
admin rssd_name – To find the RSSD database for the RepServer

rs_helprep – Displays a list of table and function repdefs in current RSSD

rs_helpsub – Displays a list of all subscriptions to table and function repdefs in the current replicate RSSD.

rs_helprepdb – Displays list of all replicate databases with subscriptions to primary data in current Repserver

rs_helpdbrep – Displays all database repdefs in the current RSSD

rs_helpdbsub – Displays all subscriptions to database repdefs in the current replicate RSSD.

Tempdb full – recovery

If you have a situation where tempdb is full, you can normally bind your login to sa_tempdb or an equivalent extra tempdb. Here are the steps on how to do this:

- Do a select * from sysdatabases to see if there are any extra tempdbs configured; if there aren't any, you can easily create one as follows:
create temporary database to_tempdb on <data_device>='50M' log on <log_device>='50M'

– Bind your elevated login to the extra tempdb with:
sp_tempdb 'bind', 'LG', zec3yit, 'DB', sa_tempdb, NULL, 'soft'

– Then log out and back in again and you should be able to do sp_who and other stored procs again.

– To try to clear the tempdb look for long running transactions and kill them:
select * from master..syslogshold

– You can also try to abort all transactions in log suspend on the tempdb with this command:
select lct_admin("abort", 0, 2)
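For example (a sketch), syslogshold shows the spid holding up the tempdb log, which you can then kill:

select spid, name, starttime
from master..syslogshold
where dbid = db_id('tempdb')
go
kill <spid>
go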

Kill Logins proc/script

To Kill any logins before doing for example a failover use stored proc sp__kill_all:
e.g.
sp__kill_all
or
sp__kill_all 'kill_m_all' -- which kills all the user logins on the Server.

If you just want to kill logins for a particular database, if you are loading into it for example, then there is another stored proc, sp__kill_db
e.g.
sp__kill_db <dbname>


create procedure dbo.sp__kill_all
/* Copyright (c) 2003 Rob Verschoor/Sypron B.V. */
@p_login varchar(30)
as
begin
declare @p int, @cmd varchar(12), @suid int
declare @kpid int, @spidcmd varchar(16), @status varchar(12)
declare @login varchar(30), @xactyn varchar(3)
declare @rc int, @err int
set nocount on

if proc_role("sa_role") = 0
begin
print "You need 'sa_role' to run this procedure."
return 0
end

if @p_login = "?"
begin
print " "
print "Usage: sp__kill_all { login_name | 'kill_m_all' }"
print ""
print "This procedure kills multiple ASE processes as specified:"
print " "
print "Arguments:"
print " login_name - kills all processes for this login name"
print " 'kill_m_all' - kills all processes"
print ""
print "Copyright (c) 2003 Rob Verschoor/Sypron B.V."
print "Visit http://www.sypron.nl"
print " "
return 0
end

select @suid = NULL
if @p_login != "kill_m_all"
begin
select @suid = suser_id(@p_login)
if @suid = NULL
begin
print "'%1!' is not a valid login name", @p_login
return 0
end
end

select spid, kpid, status, cmd, suid, tran_name
into #to_be_killed
from master..sysprocesses
where suid = isnull(@suid, suid)
and suid != 0
and spid != @@spid
select @rc = @@rowcount, @err = @@error

if @p_login != "kill_m_all"
print "Found %1! spids for login '%2!'", @rc, @p_login
else
print "Found %1! spids", @rc

select spid, kpid, status, cmd, suid, tran_name
into #tmp_2bkilled
from #to_be_killed

while 1=1
begin
set rowcount 1 -- process row by row
select @p = spid, @kpid = kpid, @spidcmd = rtrim(cmd),
@xactyn = case tran_name when NULL then "no" when "" then "no" else "yes" end,
@status = rtrim(status), @login = suser_name(suid)
from #tmp_2bkilled
if @@rowcount = 0 break -- exit loop when ready

delete #tmp_2bkilled where spid = @p
set rowcount 0

print "Killing spid %1! (login '%2!'; %3!,%4!; active Xact:%5!)", @p, @login, @status, @spidcmd, @xactyn
select @cmd = "kill " + str(@p, 5)
exec(@cmd)

-- In pre-12.0, CIS can be used instead of exec-immediate. See page 60 in the book
-- "Tips, Tricks & Recipes for Sybase ASE" (www.sypron.nl/ttr)
--exec sp_remotesql SYB_ALIAS, @cmd
end

set rowcount 0

-- wait a sec and see who's still there
waitfor delay "00:00:01"

select t.spid, t.kpid, t.status, t.cmd, t.suid
into #still_there
from master..sysprocesses sp, #to_be_killed t
where sp.spid = t.spid
and sp.kpid = t.kpid
select @rc = @@rowcount, @err = @@error

if @rc != 0
begin
print ""
print "%1! killed processes are still around:", @rc
select spid, suser_name(suid) login, cmd, status
from #still_there
order by spid

print " "
print "Wait a while until these processes have disappeared..."
end
end

go


create procedure dbo.sp__kill_db (@DATABASE varchar (30)) as
set ansinull on
set close on endtran on
set flushmessage on
set nocount on
set string_rtruncation on

if (@@trancount = 0)
begin
set chained off

if (@@isolation > 1)
begin
set transaction isolation level 1
end
end
else
begin
print ' sp__kill_db CANNOT BE RUN FROM WITHIN A TRANSACTION.'

print ' '

return 1
end

declare @cmdstring varchar (255),
@errorchar varchar ( 4),
@id int,
@login varchar ( 30),
@spid smallint,
@spidchar varchar ( 11)

print ' '

declare killdb_cursor cursor for
select spid, suser_name (suid)
from master.dbo.sysprocesses
where dbid = db_id (@DATABASE)
and dbid > 1
and spid != @@spid
order by spid
for read only

if (charindex ('sa_role', show_role ()) > 0)
begin
if (@DATABASE = 'master')
begin
print ' YOU CANNOT EXECUTE sp__kill_db FOR DATABASE master.'

print ' '

return 1
end
else if (db_id (@DATABASE) IS NULL)
begin
print ' DATABASE %1! DOES NOT EXIST.', @DATABASE

print ' '

return 1
end
else
begin
open killdb_cursor

fetch killdb_cursor into @spid, @login

if (@@sqlstatus = 2)
begin
print ' THERE ARE NO ACTIVE PROCESSES FOR DATABASE %1!.', @DATABASE
end
else
begin
print ' SPIDS WILL BE KILLED FOR DATABASE %1!.', @DATABASE

print ' '

while (@@sqlstatus = 0)
begin
select @spidchar = convert (varchar (11), @spid)

select @cmdstring = 'kill ' + @spidchar

print ' KILLING SPID %1! FOR LOGIN %2!.',
@spidchar, @login

execute (@cmdstring)

fetch killdb_cursor into @spid, @login
end

print ' '

if (@@sqlstatus = 2)
begin
-- execute sp_status @PARM=@DATABASE
print ' ALL APPROPRIATE PROCESSES HAVE BEEN KILLED.'
end
else
begin
select @errorchar = convert (varchar (4), @@sqlstatus)

print ' CURSOR ERROR. STATUS = %1!.', @errorchar

return 1
end
end

close killdb_cursor

deallocate cursor killdb_cursor
end
end
else
begin
print ' ONLY A SYSTEM ADMINISTRATOR CAN EXECUTE sp__kill_db.'

print ' '
end

go

Elevate a user’s permissions for a proc

If a user has a proc which needs, for example, to do a truncate within it, then the user needs elevated rights (typically dbo ownership) for it to work. If this is not possible then you need to use a wrapper stored proc to temporarily elevate the user's access while running the proc.

The steps are as follows:

- create role sii_role with passwd '********'

– grant role sa_role to sii_role

- grant role sii_role to <login> -- this gets added as inactive by default so needs to be set to on, i.e. it's safe

– Create the wrapper stored proc, e.g:
create procedure iSP_bcp_ts_gross_trade_audit_wrap
(
@replay_mode char(1) = NULL -- The same variables as in the stored proc being called
)
as
begin
/*
This proc calls iSP_bcp_ts_gross_trade_audit.
Call order is:
iSP_bcp_ts_gross_trade_audit_wrap
iSP_bcp_ts_gross_trade_audit
*/

set nocount on

-- enable sa_role; DBAs to replace the placeholder with the real password and run sp_hidetext
set role sii_role with passwd "********" on
-- We have sa_role so run original code
exec iSP_bcp_ts_gross_trade_audit @replay_mode
-- disable sa_role
set role sii_role off

return 0
end
GO

– sp_hidetext iSP_bcp_ts_gross_trade_audit_wrap

- grant execute on iSP_bcp_ts_gross_trade_audit_wrap to <login>

– The user can then execute this new wrapper stored proc as they did the old stored proc and it should work fine.
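As a quick sanity check (a sketch; show_role() reports the roles currently active in the session), the user can confirm the elevation only lasts for the duration of the wrapper:

select show_role()
go
exec iSP_bcp_ts_gross_trade_audit_wrap
go
select show_role() -- sa_role/sii_role should not appear before or after the wrapper call
go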