PD Sybase Admin Doc 2/2 By Ed Barlow 

This doc should follow part 1/2
It is copyright (c) 1995 by Ed Barlow, as is part 1/2
Enjoy and use it wisely

:)
Ed
----- CUT HERE ------------------------------------------------------------

NAMING STANDARDS

Sybase supports object names up to 30 characters long so there is no
particular need to use abbreviations.

UNIX

Hostname

Systems are typically named either according to some standard or according to
some theme (planets, cars, states...).  The naming standards I have seen
usually involve concatenating business unit, location, role, and a number.
For example, the name  slnydev1 could represent the New York sales development
#1 server and  slsfprod2 could represent the San Francisco sales production #2
server.  A standard of this sort is usually only used in larger installations
because most administrators prefer names that are fun so long as the number of
servers is small.  A final standard that can be used in small shops is simply
to name the system after what it is doing (sales, dev1).  System names should
be in lower case.

Aliasing of hostnames can be used to have multiple names for a given system.
The internal name is only used by scripts on that machine, so if your server
is named cheqprod1 (Chicago production equity server), the users might simply
refer to it as equity1.  Remember that most users of  Sybase applications
never see the system name except in their PC setup (which an administrator
will probably do) so the name is of limited importance.

If a failover configuration is available it is important to be able to
differentiate between the primary (active) system and the secondary (failover)
system.  Names will vary depending on the failover software used.  If your
software guarantees that a certain name will always be primary, use something
like sysa and sysa_backup.  If you can't guarantee this, do not name the
servers in this manner.  It is very confusing when the server saturn_b is
primary instead of saturn_a, and can lead to errors.

Kernel

A recommended naming convention for kernels is to concatenate keywords with
the plus sign.  For example, SYBASE+DBE or SYBASE.

Sybase Setup

Server Names

Naming requirements for Servers are similar to those for systems.  Server
names should be in capital letters.  On single-server machines, the Server
should have the same name as the system (in capital letters).


Sybase Disk Devices

Sybase disks should have names that help administrators.  Note that it is
difficult to change disk device names once they have been set.  If the
administrator can guarantee disk stability (which is usually the case), names
like sd4b or c1d1t2s3_log are great.  The other standard is to name the devices with names
like datadevice and logdevice.  The purpose here is to help the administrator
spread data and logs between disk devices easily.

Devices placed on UNIX files should never be placed in the Sybase home
directory.  This directory should contain only Sybase provided files.

Sybase Dump Devices

There are two types of database dump devices: full and transaction log.  The
naming standard that I have seen commonly used is <database>_dbdump for full
dump devices and <database>_logdump for transaction log devices.  I recommend
creation of both of these devices for each database created even if they will
not be used.  If the devices are created, it is an easy matter to have a
script that autoconfigures itself to dump all databases (except model and
tempdb) every night and another that autoconfigures itself to dump all
transaction logs of databases that have truncate log on checkpoint off.

The dump devices should point to files that the administrator can find easily.
One standard that works well is to have a directory /dumps (in the root
directory).  This directory can be a symbolic link to whichever disk you wish
the disks to be placed on.  In that directory should be two directories for
each server.  One will be named <SERVER>_dumps and the other <SERVER>_logs.
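The layout above can be sketched in a few lines of shell.  The function name
and arguments here are illustrative assumptions, not part of any standard
toolset:

```shell
#!/bin/sh
# Create the /dumps directory layout described in the text:
# <root>/<SERVER>_dumps for full dumps, <root>/<SERVER>_logs for log dumps.
make_dump_dirs() {
    root=$1     # e.g. /dumps (may itself be a symbolic link)
    server=$2   # e.g. MYSERVER
    mkdir -p "$root/${server}_dumps" "$root/${server}_logs"
}

# Example: make_dump_dirs /dumps MYSERVER
```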

Login Names

My personal favorite naming standard for logins is <first letter of first
name><last name>. This seems intuitively better than names having the opposite
order and is clearly superior to names that have no intrinsic relation to the
actual user (admin01, mars).  Another naming convention is
<lastname>_<firstname>.  This convention is a bit wordy but does alleviate
some confusion.

All accounts should be named for the primary user.  Use of group accounts
violates a basic tenet of security (the ability to identify who did what).
While it is extra work to add a large number of users, in the long run you
will be glad you did it.

While much has been discussed about cases where multiple users have the same
name (two John Smiths...), these cases are actually quite rare in real life.
If such a situation arises, simply change one or both of the names by adding a
number to it (i.e. jsmith2).
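A minimal sketch of this convention, assuming the generated names are lower
case and any deduplication suffix is supplied by the caller:

```shell
#!/bin/sh
# Build a login name as <first letter of first name><last name>, with an
# optional numeric suffix for the rare duplicate (jsmith -> jsmith2).
make_login() {
    first=$1
    last=$2
    suffix=${3:-}
    initial=$(printf '%s' "$first" | cut -c1 | tr 'A-Z' 'a-z')
    printf '%s%s%s\n' "$initial" "$(printf '%s' "$last" | tr 'A-Z' 'a-z')" "$suffix"
}

# Example: make_login John Smith   -> jsmith
#          make_login Jane Smith 2 -> jsmith2
```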

Database Names

Database names should usually be obvious based on the type of data in that
database.  As with all Sybase names, 30 characters are allowed, and long names
are preferred.  Database names should be in lower case and should not include
the word database or the initials db.  One note: sa should never be the only
dba for a database.  This not only implies that the sa password is widely
available, but also that you will never be able to delete that database,
because you will forget why you set it up and will have nobody to contact to
remind you (it happens).  The use of a group account (salesdbo...) as dbo will
lead to the same problem (who should you contact in a year when the database
finally runs out of space?).

Sybase DDL

The following suggestions are made regarding Sybase DDL names.

Table

Tables should be named with the common English noun that represents a row of
data.  Table names should be singular and lower case (e.g. use person instead
of people).  The table name should not contain the words file or table.  Many
organizations come up with a list of synonyms for keywords (e.g. use yr for
year and qty for quantity).

Trigger

Triggers should be named <tablename>_<action>_trigger, where <action>
represents ins, del or upd.

Views

View names should be defined exactly like table names, but should be prefixed
by a v_.

Stored Procedures

Stored procedure names should start with a prefix (like ap_ for application
procedure, pp_ for Powerbuilder procedure, sp_ for system procedure...) that
allows broad categorization of the procedures.  I have found names with
prefixes easier to read than those with suffixes (like _proc).  Plurals can be
used in procedure names if it is relevant.  For example, get_row and get_rows
mean different things intuitively. One recommended grammar is
<prefix>_<verb>_<adjective>_<table> that leads to procedures like
ap_get_all_orders, ap_upd_instruction, and ap_get_tall_people.

It is useful to break up procedures that are used by front ends like
Powerbuilder from those with a more general purpose.  Stored procedures that
are used closely with front ends typically belong to one screen only and will
be changed when that screen changes.  If the developer knows the procedure is
not used elsewhere, the change will not impact the environment as severely.
Add the screen name to the grammar for front end stored procedures.  A good
grammar here is <prefix>_<screen>_<adjective>_<noun>, which will lead to names
like pp_ord_get_ord, which may look a bit redundant, but actually is quite
clear in real life (a Powerbuilder procedure that gets orders and is called
from the orders screen).

Columns

Columns should be named with the common English noun representing the data
element.  The units of the element should be discernible from the name if
necessary.  Column names should never be reused for another purpose within the
same database (this is very confusing) and all columns that have the same name
should have the same definition (char(10) not null) to prevent errors when
joining data.  It should be assumed that any columns named xxx can be directly
compared with any other columns named xxx.

Other DDL

User datatypes should be in lower case and have a tp_ prefix.
User rules should be in lower case and have a rl_ prefix.
User defaults should be in lower case and have a df_ prefix.

CHANGE MANAGEMENT

Systems Control

The rule here is simple: developers MUST NOT have access to production
systems.  Violating it is a cardinal sin.  This means that developers have
their own system, and production users have their own system.  If a unit test
or staging system is desired, it should fall under the realm of the
developers.

The DDL for each server should be saved separately.  This becomes more
complicated if a system that extracts DDL directly from the modeling tool
(Erwin...) into the database is used.  It is recommended that DDL be exported
to files in all production and staging systems.

Source code control is a big issue with database systems.  It becomes much
more complicated when multiple versions of the same database are placed in
different locations.  The only way to not get your database layouts fouled up
is to develop a plan for application upgrades to each database.  It is
recommended that the development database always be in sync with the most
current version of the model.  The test and production databases should be
upgraded only when developers request a turnover.

Use of a source code control system is essential in multi user development
(especially front end development).  Many tools such as Powerbuilder have
source code control built in (Bravo!).  Source code control for the DDL
(tables and stored procedures) needs to be standardized, but it is not
essential that a formal tool be used as it is rare that multiple developers
will work on either the model or on a single procedure.  Usually, the author
of each procedure will be in charge of making changes to that procedure, and a
rigorous procedure is not necessary.  Courtesy, commenting changes, and making
nightly backups of the source directories are essential to an ad hoc source
code control system.

Documentation of procedures is one of the overlooked concepts.  I have had
good luck with a word document template in which a brief summary of each
procedure (name, author, what it does, parameters...) is rigorously
maintained.  Of course, the DDL files need also be updated, so many people try
to define an automated mechanism to document their procedures.  These
mechanisms are rarely used and I have found that NOTHING beats a nicely
formatted (bold, multiple fonts) manual page for readability.  As a
consultant, I ALWAYS turn over quality documentation...  It is worth the time
to do the documenting twice.

One final note: a representative of the users and all developers should be
notified of turnovers.  Lack of developer communication has resulted in
numerous problems that should never have occurred (e.g. developer A does not
know that the code has changed and is trying to fix a problem).  Electronic
mail stating that the turnover occurred and mentioning exactly what changed
is important.

Control of Clients - Software Distribution Issues

It is important to manage client software distribution.  This can be a thorny
issue when you have hundreds of clients for an application.  It is suggested
that large applications set up some automatic software distribution system.
For performance reasons, clients generally run applications off of the local
hard drive.  Some method of checking the application's LAN server and copying
new software if there has been an upgrade is generally advisable and will save
you from a version control nightmare.

Use of Views To Aid Change Management

One good method of managing versions is through the use of views.  Basically,
each table will be represented by a view that selects all columns from that
table.  The view for the table xxx would be named v_xxx_01.  When an
underlying table is changed, all the views are changed so that they function
identically.  For example, if xxx had the columns y and z, the initial view
would be select y,z from xxx and would be named v_xxx_01.  If column p were
added and column z renamed to t, a new view v_xxx_02 would be added
(select y,t,p from xxx) and v_xxx_01 would be changed (select y,z=t from
xxx).  Application logic would not need to change at all.
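As a sketch of this scheme, here is a small script that emits the Transact-SQL
for both view versions after the change.  The table and column names (xxx, y,
z, t, p) come from the example above; the drop statements that a real upgrade
script would need before recreating v_xxx_01 are omitted for brevity:

```shell
#!/bin/sh
# Emit the versioned-view DDL described in the text.  In practice this
# output would be fed to isql against the target server.
emit_view_ddl() {
    cat <<'SQL'
/* original: table xxx(y, z); v_xxx_01 was "select y, z from xxx"  */
/* change:   column p added, column z renamed to t                 */
create view v_xxx_02 as select y, t, p from xxx
go
/* recreate v_xxx_01 so existing applications still see y and z    */
create view v_xxx_01 as select y, z = t from xxx
go
SQL
}

emit_view_ddl
```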

PERFORMANCE TUNING

Hardware Tuning

Generally tuning systems for optimal performance is a simple matter.  If you
spend more money, you get more performance.  There are, however, methods of
cost effectively tuning for performance.  In order of importance to overall
system throughput, I would:

1. Buy many small disk drives instead of a few large disks.  This will
minimize IO contention.
2. Buy extra disk controllers to eliminate yet another IO bottleneck.
3. Buy additional ram.  Theoretically, under System 11 one can cache the
entire database if one purchases enough ram.
4. Buy a faster CPU.  The problem here is that most vendors do NOT sell cheap
chip/motherboard upgrades, and purchasing a faster CPU often involves
swapping out the old system.

Sybase Tuning

The first trick to tuning your system is to identify the essential
characteristics of that system and to identify potential bottlenecks.
Consider if your system is read or write intensive.  You can often add
indexing to speed things up on read intensive tables.  The same is NOT true
for write intensive tables.  The next thing to do is to analyze your system to
decide where to tune.  Tuning an IO subsystem is only useful if your system is
IO limited, so identify the actual bottleneck before you start.

Spend some time optimizing disk device layout.  It is an easy way to increase
your performance.

Avoid cursor processing unless necessary.  It is generally faster to process
batches than row by row because that is how the system is tuned.  While loops
are especially slow.

Avoid long indexes, and avoid unindexed tables that have more than a few rows.
If there are any null fields in an index, add 5 characters to the index length
plus one per field that allows nulls.  Generally, null fields in an index are
not desirable due to data distribution issues.

Make sure your statistics are updated with information that reflects peak
business usage.  If you have a table that starts each day with 0 rows and ends
each day with 2000 rows, the statistics should be updated somewhere between
1000 and 2000 rows.

Tables that are constantly updated or inserted into can become very
fragmented.  Periodically rebuilding the indexes on these tables can save
space and increase performance.

Make sure that transactions do not linger in the database longer than is
necessary.  Any application using Sybase transactions should perform as fast
as possible.  Transactions hold locks, which can block other users and slow
the system dramatically.

Triggers should only be used in rare circumstances.  It is preferable to
restrict access to the database to stored procedures than to use triggers.
Remember, triggers are NOT compiled, and are therefore slow.  Triggers also
are by definition a transaction.  Avoid long, complex, triggers.

Use showplan and statistics io as much as possible on slow queries.  Look for
table scans and deferred updates (tmp tables being used).

The optimizer can only handle 4 tables at a time (at least pre-System 10, but
I do not think this has changed).  This means that a 5 table join is
performed as a four table join, with the results being joined to the 5th
table.  As the optimizer will use a temporary table, it is preferable for you
to explicitly create the temporary table.

SECURITY

Three levels of security are required to prevent unauthorized access to
applications and data.  The first level prevents unauthorized access to the
network.  The second level prevents normal users from viewing data that they
should not have access to, and prevents them from damaging the systems.  The
final level deals with knowledgeable users (like an administrator) who know
how to circumvent procedures.  Be aware that the client/server environment is
intrinsically insecure and knowledgeable hackers can break security (and will
be able to do so until some network encryption like Kerberos is widely
available).  All you can do is be aware of the risks and watch for odd
occurrences.

Unauthorized access to the network is prevented by restricting dial in access
to the network.  There are a variety of products that secure the network.

Attacks by individuals who have access to the premises are much harder to
protect against.  Password security is a good front line method of securing
data on systems.  Basic security procedures should foil most unauthorized
access attempts, provided users follow normal precautions such as logging out
at night, using passwords that are hard to crack, locking their terminals when
they are unattended for periods of time, and not writing their passwords where
anybody can find them.  Any system that someone might wish to
break into must have additional security customized on an application by
application basis.  Sybase security is very flexible, and sensitive parts of
the database can be restricted to a given set of users at the database, table,
or even column level.  Setting up a Sybase security system (with groups,
grants and revokes) can prevent damage by normal users who have managed to
circumvent your front line defenses.

Both UNIX and Sybase have the concept of a trusted user who can perform
maintenance and who can fix the system if it breaks.  Normally access to these
trusted accounts is restricted to a small group of individuals who act as
administrators.  The UNIX / Sybase trusted accounts are as follows:

Account Name   Account Description                                  Frequency

root           UNIX administrator.  Required for maintenance        1 per machine
sa             Sybase administrator.  Required for maintenance      1 per Sybase server
dba            Database administrator.  Grants database access      1+ per database

UNIX Security

The UNIX end of your system should be relatively easy to secure, because most
production Sybase servers do not permit logins by normal users.  This being
the case, much of your exposure is already eliminated.  It is also good practice to run some
UNIX Security tool (e.g. cops...)  nightly.  These tools are widely available
and are free.  At a minimum, they can detect break-ins after the fact.

The following is a list of additional recommendations that should be
considered to keep your system secure:

  Limit root and Sybase sa access to administrators (under 4.1.3 - use the
wheel group).
  Disable direct logins as root or Sybase except on the console, forcing
administrators to log in as themselves and to then su to the privileged
account.
  Do not leave the machine in an insecure location.  It is easy to break
security on an unprotected system.
  All Sybase devices should be read protected from users (especially under
4.9 or earlier).
  Prohibit rlogin and rsh.  These facilities bypass password security
(/etc/hosts.equiv, ./.rhosts).
  Protect your database dumps if there is confidential information in them
(and your tapes).  Anybody could simply ftp a readable dump to another site
and load it into their own server - gaining access to everything without
ever accessing your server.
  Production Sybase servers should not import NIS password maps from the
network.  Importing of these maps will give developers login access to
these systems.  Developers generally will not need login access to UNIX
production servers.  Why open the security hole if you do not need to?
  Make sure that Sybase servers do not have any files in the Sybase home
directory that are writable by group or other.

Sybase Security

The very nature of Sybase makes for a large number of potential security
problems.  Many administrators neglect security and allow login id = password
as a default.  If there is ever a problem, sloppy procedures can land you in
trouble.  Basic precautions are important.

  When setting up new accounts, do not be sloppy and use easy passwords.  It
is recommended that passwords contain at least 4 characters and at least 1
number.
  Check for trojan horses (sp_who in a user database).  This is mostly
important under 4.9 when a trojan horse can easily get the sa password.
  Check for procedures without entries in syscomments.
  Check for any procedure or view access to master..syslogins in Pre system
10 servers.
  Lists of user access to business data should be reviewed by the business
user on a monthly basis.  The reports do not need to be fancy, but should
be regularly sent.
  Limit dbo access on any production systems to the production
administrators.
  Developers should only have access to production data through predefined
test accounts that can not change data.
  Do check for obvious passwords.  A good password for the login ebarlow is
not ebarlow.  Under system 10, it might not be bad practice to use some
form of  Crack program to try to break password security.
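A minimal sketch of the password rules above (at least 4 characters, at least
1 number, and not the same as the login).  The function name is an assumption,
and a real check would likely test for more than this:

```shell
#!/bin/sh
# Return 0 if the password meets the rules in the text, 1 otherwise.
password_ok() {
    login=$1
    pass=$2
    # at least 4 characters
    [ "${#pass}" -ge 4 ] || return 1
    # at least one digit
    case "$pass" in *[0-9]*) ;; *) return 1 ;; esac
    # the obvious password for ebarlow is not ebarlow
    [ "$pass" != "$login" ] || return 1
    return 0
}

# Example: password_ok ebarlow good4one  -> succeeds
```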

Using System 10 Roles

[ NEED INFORMATION HERE ]

Application Security

Application security is a complex issue.  Each application should develop a
set of internal audits that guarantee that nobody is misusing the system.  The
effort expended here should be proportional to the damage that could be caused
due to a problem.  The other important thing to remember here is that Sybase
does NOT allow users to change their passwords unless they have a tool like
isql (which is probably not desired).  If users should be able to change their
passwords, the application must implement a front end for this.

How to force users to set their passwords

Some applications start all users with the same password (which need not be
known by the users).  When the application logs in for the first time, it will
test for that password and, if they match, will ask the user to input a new
password.  This allows users to set their own passwords and allows
administrators to deactivate accounts that have not been used.

In conjunction with the above idea, the application could track when users set
their passwords.  This would allow the application to manage expiration of
passwords and to force users to reset their passwords in a safe, secure
manner.

How to Preclude Use Of Other Front End Tools (isql)

One common idea is to have the user's password be something different from the
server password (programmatically encrypt it in the front end).  The method of
encryption need not be complex, so long as the users do not find out (i.e. you
could just change the case of the password...).  This will ensure that users
can log in to the server through the application, but will make it virtually
impossible to log in using another front end like isql.
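As a sketch of the simplest mangling mentioned above (just changing the case),
assuming the front end can transform the typed password before connecting.
This is obfuscation, not real encryption, but it is enough to stop casual isql
use:

```shell
#!/bin/sh
# Flip the case of every letter in the typed password before it is sent
# to the server.  The user never learns the real server password.
mangle_password() {
    printf '%s' "$1" | tr 'a-zA-Z' 'A-Za-z'
}

# Example: mangle_password Secret9  -> sECRET9
```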

What To Do In Addition To Sybase Security

Applications should provide their own level of security.  For examples, if the
application relies on stored procedures to make sure the books balance, a
little checking procedure could be run every night to make sure that nothing
funny happened during the day (i.e. a user hacked in some data).  This is
not really the administrator's department, but as the chief system cop, it
would not hurt to make this kind of suggestion to the application developers.
It really is good practice to have a debug_application stored procedure.

How To Give Different Users Different Looks And Feels

It is easy to develop a security table that is read by the front end.  This
table could define which menu items each user has access to and which
functions the user can use.  It is generally good practice to allow enabling
and disabling of each menu item if you are going to allow any control at all.
There are two ways to guarantee that users have rows in this application
security table.  One way is to place an insert and delete trigger on sysusers
and sysalternates (this does work and I have done it) to manage that table.
When sp_adduser or sp_addalias is run, a permissions row is added.  You could
also write a stored procedure that is run when the application is started.
This procedure will return the appropriate permission information, adding rows
based on the defaults if none exist.

Forcing The Users To Log Out At Night

Users will leave their application logged in to the system when they go home
at night.  This is a big security hole (to say the least).  It is easy to use
the application's idle timer (Powerbuilder, for example, has one) to log the
user out if inactive for too long.  Sybase provides no server mechanism to
disconnect clients (except for a reboot, which is not necessarily desirable),
so if a client mechanism is available, use it.

BACKUPS

Every night a set of scripts is run on each server to perform routine
administrative tasks.  These scripts are generally known as "operations
scripts".  These scripts are responsible for running backups and for running a
program known as DBCC (database consistency check) on each database to check
for data integrity problems.  There are several issues that deal with backups,
most notably the fact that Sybase does not provide these scripts for you.

The following actions are generally performed by Operations Scripts as  part
of normal Nightly Procedures:

        Dump Database
        Object Level Dumps (optional)
        Dump Transaction (usually runs all day on 5-15 minute interval)
        UNIX Tape  Backup
        Update Statistics
        Rebuilding Indexes
        Recompile Stored Procedures
        DBCC & Filter Output
        Server Reboots
        Failover System Management (loads of full dumps and log dumps)
        Browse Sybase Errorlogs

Backup and Recovery Procedures

Generally, Operations Scripts are shell scripts and are run out of cron, the
UNIX scheduler.  Many larger shops have, however, purchased scheduling
packages that are significantly more robust than cron.  Recently, I have found
several shops that have written the majority of their scripts in the perl
language (with sybperl extensions).  Perl has many features that shell does
not, and is not a bad choice for your scripting language.

The operations scripts are generally composed of several modules.  One module
should dump transaction logs.  Another should manage full database dumps to
disk and manage runs of the dbcc program.  If a failover setup exists, a third
module should ftp files from the production systems to the failover systems,
and a fourth module should provide a method to load log dumps into the backup
server (this is difficult to do by hand).  A final module should be a driver
that runs a configurable set of nightly processing scripts in the appropriate
order.
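The driver module can be sketched as a loop over the other modules.  The
script names in the example comment are placeholders, not files this document
defines:

```shell
#!/bin/sh
# Run a configurable list of nightly scripts in order, logging and
# stopping at the first failure so later steps do not run against a
# half-finished environment.
run_nightly() {
    for module in "$@"; do
        if ! "$module"; then
            echo "nightly: $module failed" >&2
            return 1
        fi
    done
}

# Example: run_nightly ./dump_logs.sh ./dump_databases.sh ./run_dbcc.sh
```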

The output of the results of the nightly jobs should be reviewed by operators
(either print the output or copy it to a common directory). Any problem
discovered while the nightly jobs are running should alert the appropriate
error notification system.  When an alert is trapped, the operations staff
will contact the appropriate systems administrator who will resolve the
problem.

Sequencing the operations scripts is important.  You must ensure that your
backups occur during the required window of processing time, which may be
difficult on large database systems.  Dbccs take the longest of any of the
processing steps (unless you are performing extensive bcps).

Processing Steps

There are several ways to perform backups, but the best is probably to dump
each Sybase database to a UNIX file.  A UNIX file provides a high level of
recoverability because it is readily available, so the files can be loaded
much faster than from tape (you usually have to find the tape).  It is
recommended to have 2 full dumps on disk at any point in time (in case one is
bad).  Dump files should be stored with a datestamp tacked on to the end (i.e.
name the file xdb_dump_Jan_11_1990_11:33PM instead of xdb_dump).
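A sketch of building such a name.  Note that a sortable stamp (year first) has
an advantage over the Jan_11 style shown above: a plain ls lists the dumps in
chronological order, which also helps when log dumps must be loaded in
sequence.  The function name and format are illustrative assumptions:

```shell
#!/bin/sh
# Build a datestamped dump file name.  If no stamp is given, use the
# current time in a lexically sortable YYYYMMDD.HHMMSS form.
dump_file_name() {
    db=$1
    stamp=${2:-$(date +%Y%m%d.%H%M%S)}
    printf '%s_dump_%s\n' "$db" "$stamp"
}

# Example: dump_file_name xdb 19900111.233300 -> xdb_dump_19900111.233300
```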

Dump files are large.  There is no reason why a dump file should not be
compressed.  Regular UNIX compression does a good job.  GNU provides a free
compression program (gzip) that is better and faster.  Be aware that
compression of files requires that the file system have significant extra
space.
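A sketch of compressing everything but the newest dump, so the most recent one
stays instantly loadable while older dumps take less space.  It assumes gzip
is installed and that the directory holds only dump files:

```shell
#!/bin/sh
# Compress all but the newest uncompressed file in a dump directory.
compress_old_dumps() {
    dir=$1
    # newest first; skip the first (most recent) entry, gzip the rest
    ls -t "$dir" | grep -v '\.gz$' | tail -n +2 | while read -r f; do
        gzip "$dir/$f"
    done
}
```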

Once all databases are dumped to disk, dbccs should be run against each
database.  It is recommended that dbcc checkdb, dbcc checkcatalog, and dbcc
checkalloc be run against each database every night.  After the dbccs are
run, their output should be filtered, and if any errors occurred, an error
notification system should be called to warn operators.

DBCC will result in spurious warning messages when a database is not in single
user mode.  There are two methods for dealing with spurious messages.  Method
one is to rerun the dbcc command and then compare the two outputs with the
UNIX comm command.  Method two is to look for particular messages and then just rerun a
checktable on the tables that are shown to be corrupted by the first pass.
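Method one can be sketched with sort and comm: a message that appears in both
runs is treated as real, while transient not-in-single-user-mode noise differs
between runs and drops out.  The function name is an assumption:

```shell
#!/bin/sh
# Given two files holding the output of two dbcc runs, print only the
# messages common to both.  comm requires sorted input.
real_dbcc_errors() {
    t1=$(mktemp)
    t2=$(mktemp)
    sort "$1" > "$t1"
    sort "$2" > "$t2"
    comm -12 "$t1" "$t2"
    rm -f "$t1" "$t2"
}
```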

Any special processing (update statistics, dbcc object level dumps...) should
then be performed.

A program should then scan the Sybase error log and mail any new messages to
the administrator.

After the dbccs finish, all UNIX file systems (which include the Sybase
dumps) should be backed up to off-line media (tape).

In 4.9 and earlier, it was only possible to perform a single operation per
server at once (i.e. you could not dbcc multiple databases at once).  DOES
SYSTEM 10 CHANGE THIS???

Another procedure should be in place to automatically dump transaction logs
for any databases that need it (this can be autodetected in the database).
Like database dumps, transaction log dumps should be saved with the datestamp
tacked on to the end.  Be sure to name the dumps in a manner that facilitates
loading the log dumps in order.  Failure to load them in order will result in
the server rejecting the load, an error that is almost impossible to
comprehend or fix.

Tapes

Tapes should  be labeled clearly and should be stored in a secure tape library
(do not scrimp here if you are doing it yourself).  It is important to define
a tape procedure.  It is also important to be aware of the number of times
your tapes can be written to before they fail.

How long to store your tapes is a matter of business policy.  Because tapes do
go bad, it is recommended that older tapes be used for permanent storage.  An
example of a tape policy is the following: Nightly tapes should be kept for at
least a week.  End of week tapes will be kept for 6 months.  End of Month
tapes will be kept on site for 6 months and then stored permanently off site.
Your procedures will vary.

A few more notes on tapes.  If you are using the UNIX dump and load command,
be aware that these commands are not intuitive (to say the least) and it is
probably good to have written procedures on how to work the loads.  I have
found that the data center operations staff can usually mount a tape within 30
minutes in an emergency, but this time should be included in any documents
relating to recovery that you might write.  Random checking of your tapes for
readability is also a good idea.  I have encountered at least one situation
where the tape that was to be loaded was blank.  When you need your tape
backup, you should be sure it is there.

Issues With Backup Procedures

If an application consists of multiple databases, remember it is possible to
have dumps from different times yielding referential integrity problems (i.e.
the app database refers to stock ABC and the stock database has no such record
in it).  This can happen if your dumps occur at the same time any load is
occurring.  It is important to understand the application requirements, and to
be sure no loads are being performed during the database dump window.

OTHER

Killing SQL Server Processes

The normal way to shut down a SQL Server is to issue a shutdown command.  If
any user processes are running, the shutdown will wait until they are done;
if you cannot wait, issue a shutdown with no_wait.  This will usually stop
the server.  If it does not, issue a UNIX kill -15.  Kill -15 sends a signal
that the server interprets internally as a shutdown with no_wait.  Only if
the kill -15 does not succeed should the administrator consider issuing a
kill -9.
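That escalation can be scripted so kill -9 is only ever a last resort.  A
minimal sketch follows; the grace period is an assumption, not a Sybase
requirement, and in practice you would run this against the dataserver's pid.

```shell
# Escalating kill: SIGTERM (kill -15) first, SIGKILL (kill -9) only if
# the process survives the grace period.  $1 = pid, $2 = seconds to wait.
graceful_kill() {
    pid=$1
    grace=${2:-30}
    kill -15 "$pid" 2>/dev/null || return 0   # already gone
    i=0
    while kill -0 "$pid" 2>/dev/null; do      # still alive?
        i=$((i + 1))
        if [ "$i" -gt "$grace" ]; then
            kill -9 "$pid" 2>/dev/null        # last resort
            break
        fi
        sleep 1
    done
}
```

For a real server, make the grace period generous: the dataserver may need
time to flush before it exits.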

Sometimes when the UNIX kill command is issued, the dataserver will leave
stray UNIX dataserver engine processes running on the machine.  These
processes hold on to their shared memory and can cause the UNIX operating
system to swap.  To find engines that belong to no server, simply look for
dataserver processes that have been inherited by /etc/init (process ID 1).
These should be killed.
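A sketch of that check follows.  The filter is split out as a function so it
can be shown against canned input; the ps options assume a POSIX-style ps and
vary by platform.

```shell
# Find dataserver engines whose parent is init (PID 1) -- i.e. engines
# that have outlived the server that spawned them.  Reads lines of
# "PID PPID COMMAND" on stdin and prints the stray PIDs.
stray_engines() {
    awk '$2 == 1 && $3 ~ /dataserver/ { print $1 }'
}

# Live usage (POSIX-ish ps assumed):
#   ps -e -o pid,ppid,comm | stray_engines
# Review the list by hand before feeding it to kill.
```
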

The reason to shut down a SQL Server using the shutdown command is
straightforward.  The shutdown command checkpoints all databases.  This
simplifies the server's restart because it will not need to sort through the
transaction logs of these databases.  A manual checkpoint will do the same
thing.

How to Break Sybase

There is a simple way to break Sybase that all administrators should be aware
of.  Log in to the server and type begin tran.  Leave yourself logged in.
Voila.  Eventually the transaction log of the database you are in will fill
up, because the dump transaction command can only clear the log up to the
oldest uncommitted begin transaction.  If you see transaction logs filling
up, long running transactions are one of the first things you should check
for.
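On System 11 and later, the oldest active transaction in each database is
visible in master..syslogshold, which makes this check easy to script.  A
sketch follows; ISQL_CMD is a stand-in so the wrapper can be exercised
without a live server, and the query assumes System 11+ (earlier releases do
not have syslogshold).

```shell
# Report open transactions per database (System 11+ only).
ISQL_CMD=${ISQL_CMD:-"isql -Usa"}

oldest_open_tran() {
    $ISQL_CMD <<'EOF'
select h.spid, db_name(h.dbid), h.starttime, h.name
from master..syslogshold h
go
EOF
}
```

Any row with an old starttime is a candidate for the scenario above: a
session that issued begin tran and went to lunch.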

This implies several things about how applications should work with Sybase.
No application should hold transactions open for long periods of time.  One
option to check for is the Autocommit option in PowerBuilder.  All
PowerBuilder applications should set Autocommit to TRUE so transactions are
not held open.  If they don't, a user who logs in to Sybase through
PowerBuilder and then does NOTHING will cause transaction logs to fill up.

ISQL

Isql is a pretty basic tool and has many shortcomings, but it has one
wonderful advantage: it is free.  Here are two things to know about isql
when used in scripts.

To use isql in a shell script, try using input redirection as follows:

         isql -Ulogin -Ppassword  <<EOF
         use xxx
         go
         sp_who
         go
         EOF

To prevent the password from being displayed in the process table by isql
(this does not happen in all versions - recent versions have fixed this
security hole), supply it as the first line of input instead of using -P:

         isql -Ulogin <<EOF
         password
         use xxx
         go
         sp_who
         go
         EOF
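The same trick can be wrapped in a function so the password lives in a
mode-600 file and never appears on a command line.  This is a sketch: the
password file name is hypothetical, and the isql binary is parameterized so
the plumbing can be demonstrated without a server.

```shell
# Run an isql batch with the password supplied as the first line of
# stdin (never as a -P argument visible in ps).
# $1 = login, $2 = password file (keep it chmod 600); SQL arrives on stdin.
ISQL=${ISQL:-isql}

safe_isql() {
    { cat "$2"; cat; } | $ISQL -U"$1"
}

# Usage (the ~/.sybpw file name is just an example):
#   safe_isql sa "$HOME/.sybpw" <<'EOF'
#   sp_who
#   go
#   EOF
```
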

STORED PROCEDURE LIBRARY

I highly suggest administrators pick up my free stored procedure library.  It
enhances the procedures that Sybase provides and is VERY useful.  It has been
published several times through comp.databases.sybase.  It can be downloaded
from the following web site:

        http://www.***.com:2080/Sybase/barlow

FINALLY (A PLUG FOR MY PRODUCT)

This plug is included only for the listserver mailing list, and if you really
don't like it I will take it out of the comp.databases.sybase posting that I
will put together.  I know it cheapens the rest of the document (is this
internet guilt?), but I wrote the document primarily because I felt like it,
and I am giving it away for free, so I feel entitled...  I have spent the
past year (full time) developing SQL Administrator, a PC and UNIX Sybase
administration tool.  This is the first tool that I have seen that was
written by an administrator for administrators.  It beats the pants off
anything else on the market.  We have really been stopped cold on the sales
end though (is there really no distribution mechanism at all for Sybase
tools?), and so I am tacking on a brief description here.

The software is available for all versions of Sybase.  It is currently
available for SunOS and Solaris, but the UNIX agents can easily be ported to
any hardware with a recompile and minimal shell script testing (easy).  I
have additional product documentation that I will send if you send me a fax
number, and I can send you a sales package by snail mail if you are
interested.

SQL Administrator
The Sybase Enterprise Administrator

SQL Administrator is a full featured tool that assists Sybase administrators
in all aspects of their work.  Sybase administration relies on significant
manual intervention to discover and correct problems, which leads to costs
not scaling well.  The cost to administer 20 servers is twice the cost to
administer 10 servers. SQL Administrator allows automation of many of the
manual tasks that are currently performed, and provides administrators with
the information they need to make their environment more stable and perform
better.  This is a key design philosophy.  SQL Administrator does NOT help
you type "create database", which seems to be the point of all the other
"so called" Sybase administration packages on the market.  I never had a
problem typing my own commands and do not see the point of these packages.
SQL Administrator WILL give you the information you need to administer your
systems in the form you need it.

Our UNIX agent software collects information from the master database, the
database system tables, the Sybase, UNIX, and dbcc errorlogs, and basic
facts about your UNIX configuration into a "master master" database.
Administrators then access this data from a nice PowerBuilder front end (as
of 1/96, PC only - X version available shortly).  We provide many reports and
analysis tools that allow you to easily understand what is going on in your
environment.  Additionally, SQL Administrator provides a bunch of utilities
that you are probably developing yourself to manage your environment.  These
are professional versions of the utilities you need to administer Sybase
well, utilities often reinvented by administrators.

The tools and reports in SQL Administrator are tailored to your Sybase
environment in a way no competing product provides, and SQL Administrator
offers many features available nowhere else.
The following is a brief synopsis of features:

I.    Use over 70 Configuration Reports to give an enterprise wide view of
your environment (available through no other products). This helps you
understand and manage your multi server environments.

II.   A Real Time Multi Server Monitor provides color coded views of the
state of your UNIX and Sybase Servers. SQL Administrator allows you to drill
down and discover details of any problems.

III.  Graphical Historical Reports show server statistics and space usage over
a period of time you define, allowing planning for growth and usage analysis.

IV.   SQL Administrator Compares Servers & Databases, reverse engineers DDL,
and can derive the statements used to set up the server.

V.    Auto Discover Systemic Design Weaknesses such as bad indexes,
inconsistently defined columns,  and poorly indexed tables.

VI.   Identify orphaned rows (detail rows without parents) in your databases
with our Referential Integrity Checker.

VII.  An Integrated Batch Scheduler and Error Handler automate nightly
batch jobs and provide custom facilities to intelligently update statistics,
bulk copy data out, and rebuild fragmented indexes.

VIII. SQL Administrator tracks Unix and Sybase error logs, filters dbcc
output, and identifies space and mirror problems allowing you to Proactively
Manage Problems.

IX.   Our Security Auditing Facility finds Sybase security holes.

X.    SQL Administrator consistently performs login administration on
multiple servers based on predefined templates.

SQL Administrator is reasonably priced ($1500/server plus $500/client for a
single server installation until March 1, 1996 - discounts available for
large installations) and non-intrusive.  If you are interested in further
information, please contact SQL Technology at (212) 366-1137.

SQL Technologies
 Phone: (212) 366 - 1137
 Fax:   (212) 691 - 1193



Fri, 17 Jul 1998 03:00:00 GMT
 