Debate: 2,000+ users on Sybase

Having sparked something of a debate on this subject, I have decided
to share some of the responses I have received to date with those of
you out there who might be interested in the information.  Feel free
to jump in and speak your mind; I will maintain this thread as
further responses arrive.

I imagine there are a number of folks in the same position as I am today,
migrating Sybase apps onto an enterprise scale.  As a Systems
Integrator/CS Architect, the task seems daunting on the Sybase side.

Nonetheless, the business case for changing the corporate-wide DBMS
platform within a Fortune 500 company is often beyond the scope of what
we as technicians can influence or change.  From the client's perspective,
the cost of gutting, retooling and retraining infrastructure support
on a wide scale is prohibitive.  Of course, many have already made the
argument here against perpetuating a possible mistake.  There's no
easy answer.

Thus, I will attempt to forge ahead seeking a systems architecture for
a large-scale global Sybase implementation.  To date, my requirements
have grown to 1,500 users in NYC with another 500 global remote
accounts.  This app will in time be rolled out to an audience of 10k+
users.

We are currently in the architecture requirements stage of planning,
with an eye towards data and transaction modeling.  The client app is
Smalltalk/Parc Place.  Large-scale MP Sybase Unix servers with Repl
Server (when it works entirely) is still the preferred approach.

Following is the original case, after which I have attached some of
the responses.

Q: To date, we have only run smaller configurations of OS/2, NLM and
Sparc/SunOS of Sybase 4.9.2.  We are currently looking at a project
for 1,000+ users.  No model figures yet on trans rates, etc.

I hope to solicit general advice and recommendations from those of you
that have deployed systems of this scale.  

Our current thinking is as follows:

We figure Unix on a large MP Sparc or Alpha; we have no scaling info
on NT/Alpha Sybase 10.x at this time.  AIX is an outside consideration
(how large do the RS6000s scale??). We also assume that we will be
employing Repl server between multiple physical servers, or possibly
some type of clustering topology.  Navigation server does not seem
ready for prime time yet.  And no, we are not in a position to dump
Sybase and move to Oracle on a Vax cluster or something.

Initial performance modeling, lab testing, etc. is scheduled to begin
in 4Q.  
A: Did you say 1000 accounts, or 1000 active logins (connections)?
Have a look at the 'troubleshooting guide' where it talks about the
Sybase memory model.  As you will see, each connection incurs a
certain overhead, even if it is not active.  1000 active connections
is a studly quantity, even if people aren't doing any work.
All that aside, I have personally set up and managed a site with 200
connections, on an SS10 with Sybase 4.9, so I think you will be able
to do this (my transactions were shallow).  Just bring memory.  I
would initially use an SS20 w/256MB RAM.  One processor is OK, but two
if you can manage.  Oh, and this machine will do nothing else but
Sybase, right?
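The per-connection overhead mentioned above is easy to size with back-of-envelope arithmetic. A minimal sketch, assuming a hypothetical ~50 KB of server memory per configured connection (the real figure depends on your Sybase release and stack/packet-size settings; check the troubleshooting guide):

```python
# Rough sizing of Sybase user-connection memory overhead.
# The 50 KB per-connection figure is an ASSUMPTION for illustration only;
# the actual value varies by release and configuration.

KB = 1024

def connection_memory_kb(connections, per_conn_kb=50):
    """Memory consumed just by configured connections, in KB,
    before any of them do a single unit of work."""
    return connections * per_conn_kb

# 1,000 configured connections tie up ~50 MB of server memory up front.
overhead_mb = connection_memory_kb(1000) / KB
```

The point is that connections cost memory even when idle, which is why 1,000 configured logins is a different animal from 1,000 named accounts.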
Guy Cole (KQ6J) * "Expert Plain And Fancy Bit Twiddling" *

A:  I was involved at XXXXX with a time card entry system that would
allow all 5,000 of the employees to enter their own time cards at the
same time.
The trick was NOT to have them in continuous contact with the server.
They would log in, retrieve information, and log out; they would log
in every time they needed more info, or to perform an update, or to
exit and quit.  They were always logging in and out for every
operation.
We used a cross-platform C/GUI development tool called XVT.  It created
a very stupid client connected to the Sybase engine using DB-Library,
with RPC calls to stored procedures on the server performing
most of the functions.  It worked VERY well.
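The connect-per-operation pattern described above can be sketched in outline. This is an illustrative Python sketch with a stubbed connection class standing in for a real DB-Library handle; the stored-procedure name is hypothetical. The point is only that the client never holds a connection between operations:

```python
# Sketch of the connect-per-operation pattern: open, do one RPC to a
# stored procedure, close.  `FakeConnection` is a stand-in for a real
# DB-Library/ct-lib handle; "sp_save_timecard" is a made-up proc name.

class FakeConnection:
    open_count = 0            # track concurrent connections for the demo

    def __init__(self):
        FakeConnection.open_count += 1

    def rpc(self, proc_name, **params):
        # A real client would invoke the stored procedure as an RPC here,
        # so the server never has to parse raw SQL text.
        return {"proc": proc_name, "params": params}

    def close(self):
        FakeConnection.open_count -= 1

def run_rpc(proc_name, **params):
    """Connect, call one stored procedure, disconnect immediately."""
    conn = FakeConnection()
    try:
        return conn.rpc(proc_name, **params)
    finally:
        conn.close()          # never hold the connection between operations

result = run_rpc("sp_save_timecard", emp_id=42, hours=8)
```

Each of the 5,000 users consumes a connection only for the duration of one call, so peak concurrent connections stay far below the user count.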

PowerBuilder, while very slick, will NOT use the RPC method; while it
can call SPs, it forces the Sybase server to parse the SQL rather than
treat it as an RPC call.

Randy Jordan

A:  I highly recommend you move to Oracle.  I do not believe that
Sybase System 10 can support 1000+ users; we have concurrency
issues with only 20 people.

We have also kept an ongoing list of Sybase drawbacks:
We have both Oracle 7 and Sybase 10.02 in house. I will without a
doubt tell you to use Oracle.


Sybase has a limit of 16 tables per query. Yes, that is HARDCODED!
When you define referential integrity constraints on a table, you will
run into this limit. The reason is that Sybase takes each RI
constraint and adds the referenced table for that constraint to your
existing query. So if you have a table with 12+ RI constraints and
your query has 5 tables, you will hit this limit. What is worse,
Sybase sits there and stares at you for 2 minutes before returning a
message like "Unable to enforce integrity". Not a very descriptive
message. We had to enforce RI through triggers, which is a real
pain. Our schema has 430 tables in it, which translates to roughly 500
pages of trigger code. If you wish to drop a column that is a foreign
key, you cannot just drop the column: you need to drop the WHOLE
table, then rebuild the table and rebuild ALL triggers, even those not
referencing the table you dropped. LAME.
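The arithmetic behind hitting the 16-table wall is worth spelling out. A small sketch, assuming (as described above) that each declarative RI constraint silently adds its referenced table to the query's internal table count:

```python
# Effective table count under the hardcoded 16-table query limit.
# ASSUMPTION per the account above: each RI constraint adds one
# hidden table to the query.

SYBASE_TABLE_LIMIT = 16

def effective_tables(query_tables, ri_constraints):
    """Tables the server actually has to consider for the query."""
    return query_tables + ri_constraints

def exceeds_limit(query_tables, ri_constraints):
    return effective_tables(query_tables, ri_constraints) > SYBASE_TABLE_LIMIT

# A 5-table query against a table carrying 12 RI constraints: 17 > 16.
over = exceeds_limit(5, 12)
```

So a query that looks safely inside the limit on paper can fail solely because of declarative constraints on one of its tables.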

We have a query with 11 tables in it. Sybase's "optimizer" uses an
exponential search method. What this means is that as you add more
tables, the time it takes to plan and retrieve the query goes up
exponentially! The query with 11 tables on Oracle running under a
486/33 takes 4 seconds to retrieve. The SAME query on Sybase under a
486/33 takes 4 minutes! The SAME query on Sybase under an HP9000
800-series machine with 256 megs of RAM takes 10 seconds, still more
time than Oracle takes on a 486/33!!!
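The exponential behavior complained about above is consistent with naive join-order enumeration: n tables admit n! possible join orders, so an exhaustive search blows up factorially as tables are added. A quick illustration (the factorial model is my own framing, not something the original poster measured):

```python
# Join-order search space: n tables admit n! orderings, which is why
# exhaustive optimizer search degrades sharply as tables are added.

from math import factorial

orders_5 = factorial(5)     # join orders for a 5-table query
orders_11 = factorial(11)   # join orders for an 11-table query
growth = orders_11 // orders_5
```

Going from 5 tables to 11 multiplies the raw search space by several hundred thousand, which is why practical optimizers must prune rather than enumerate.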

Sybase has a hard-coded limit of 1,962 bytes per row. This is because
Sybase page sizes are 2K and a row is not allowed to span pages. We
ran into trouble here in a couple of cases; Oracle had no problem.
Sybase uses page-level locking. If you want to approximate row-level
locking, which is what Oracle does already, in Sybase you need to
define several char(255) dummy columns until your row size exceeds
1,024 bytes (half a page), so each page holds only one row. That IS
THE ONLY WAY TO DO IT IN SYBASE. It is only a partial solution, as
Sybase locks both the data page and the index page. In the case of a
clustered index on a table, data is stored in the order of the index.
If you update an existing row, one which is stored on the first page
of the index, it is very easy to obtain a full table lock. Sybase does
an "update not in place", which means that the row is deleted, then
reinserted. When it is deleted, the remaining rows may be shifted
down, moving other rows across pages, which requires a lock on all
pages affected. If more than 20% of the table is going to be locked,
then Sybase will choose a table lock instead. To get Sybase to do an
"update in place", you must meet the following criteria:

1) No update triggers.

If you want referential integrity, this is not a possibility.

2) No variable length columns.

No varchars!

3) You cannot update any column which has an index which is being
   used by the Optimizer.

4) You cannot allow NULL in any of the updatable columns.
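The half-page padding trick described above can be computed directly. A sketch, assuming a 2K page (1,024-byte half page) and the stated 1,962-byte per-row cap; the base row size is whatever your real columns add up to:

```python
# How many char(255) dummy columns are needed to push a row past half
# a 2K page (1,024 bytes), so each data page holds only one row.
# The 1,962-byte cap is the per-row limit stated above.

import math

HALF_PAGE = 1024
ROW_CAP = 1962
DUMMY_COL = 255          # width of each char(255) dummy column

def dummy_columns_needed(base_row_bytes):
    """Dummy char(255) columns needed to exceed HALF_PAGE, or None if
    padding would blow the per-row limit."""
    if base_row_bytes > HALF_PAGE:
        return 0         # already one row per page
    needed = math.ceil((HALF_PAGE + 1 - base_row_bytes) / DUMMY_COL)
    if base_row_bytes + needed * DUMMY_COL > ROW_CAP:
        return None      # cannot pad without exceeding the row cap
    return needed

pads = dummy_columns_needed(300)   # 300 + 3*255 = 1065 > 1024
```

Note this only widens rows so page-level locks cover one row each; as the text says, index-page locking still makes it a partial workaround.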

A where clause cannot have more than 128 comparisons in Sybase. If
you are using PowerBuilder and you have a datawindow that has 128+
updatable columns and you choose an update criteria of key and
updatable columns, again you hit a wall. No such problem in Oracle.

Table and column names are case sensitive under Sybase's default
sort order. Since all database objects are stored in database tables,
they are under the same rules as any other data. If you choose a sort
order that is not case sensitive, you will experience a roughly 30%
performance penalty.

Database corruption is a common problem.

These are the problems we have encountered so far with Sybase. It has
been an exercise in patience and frustration.


Thanx to all for their responses, especially to Tyler for his detailed
(albeit pro-Oracle) reply.

Mon, 02 Mar 1998 03:00:00 GMT