Best Practice for Performance
Edward M. Kwang
President
Ways to Improve Performance
• Refer to the knowledge base articles on http://www.elliott.com/kb (search on the keyword “Performance”) for each area below:
– Client Server
– Hardware
– Operating System
– Database
– Elliott Application Software
Btrieve vs. SQL
• Btrieve
– Pro: transaction processing
– Con: reporting
• SQL
– Pro: reporting
– Con: transaction processing
How Btrieve Works
• It is a record manager
• It retrieves one record at a time
• It is fast for transaction operations (record by record) since there is very little overhead in the Btrieve engine
How SQL Works
• It is a relational database engine
• The client issues a SQL statement to the server
– Select Cus_No, Cus_Name, Cus_St From ARCUSFIL Where Cus_St = ‘CA’
• The server compiles the statement and decides the best way to retrieve the data
• The data is retrieved and sent back to the client
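A minimal sketch of that client/server round trip, assuming a Pervasive ODBC data source; the DSN name "ELLIOTTDATA" and the use of the pyodbc package are illustrative assumptions, not part of the presentation:

    # Client side: send the slide's SQL statement through ODBC and let the
    # server do the filtering. Assumes a hypothetical DSN "ELLIOTTDATA"
    # pointing at the Pervasive.SQL engine that hosts the Elliott data files.
    import pyodbc

    conn = pyodbc.connect("DSN=ELLIOTTDATA")
    cur = conn.cursor()

    # The server compiles this statement, filters on Cus_St there, and
    # returns only the three requested columns for the matching rows.
    cur.execute("SELECT Cus_No, Cus_Name, Cus_St FROM ARCUSFIL WHERE Cus_St = 'CA'")
    for cus_no, cus_name, cus_st in cur.fetchall():
        print(cus_no, cus_name)

    conn.close()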
Performance Improvement
• The improvement from using SQL for report processing ranges from a few times to more than 10 times.
• It is usually determined by three factors:
– Network speed
• Less improvement with a faster network
• E.g., a big gain on 10Base-T, less gain on Gigabit Ethernet
– Server speed
• More improvement with a faster server
• CPU, memory, and hard disk (RAID)
– The nature of the report
SQL Is Faster With Reports
• In the previous example, SQL will
– scan the ARCUSFIL table and pick out the customers where CUS_ST = ‘CA’; this is done on the server
– return only three columns for the customers in ‘CA’ to the client, which reduces network traffic
• By contrast, Btrieve will
– scan the ARCUSFIL table on the client side
– retrieve the entire record of each customer, even though only three columns are needed
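A rough, illustrative calculation (the figures are assumed for illustration, not taken from the presentation): suppose ARCUSFIL holds 10,000 customers with 1 KB records and 500 of them are in CA. The Btrieve approach drags roughly 10,000 x 1 KB = 10 MB across the network to the workstation, while the SQL approach returns only 500 rows of three short columns, on the order of tens of kilobytes, i.e. a few hundred times less network traffic.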
Btrieve Is Faster With Trx
• For record-by-record operations, Btrieve is faster because it does not have the added layer to slow it down.
• Examples of transaction processing:
– Order Entry
– Cash Receipts
– New A/P Trx Entry
– Inventory Trx Processing
Elliott And Crystal Reports
• Elliott works at the Btrieve level (transactional)
• The future direction of Elliott reporting is Crystal Reports
• If we use Crystal Reports at the SQL level, then we have
– “The Best of Both Worlds”
• Pervasive’s slogan
Client Server
• Crystal Report Writer
– Uses ODBC instead of database files
– It is SQL vs. Btrieve
– Makes no difference if
• the DB is small
• the key is readily available
• all data needs to be returned to the client for processing
• Elliott
– Running Elliott on a workstation
– Running Elliott on the server
• Deferred processing on the server
– Faster report processing
– Less chance of data corruption during posting
Hardware
• RAID-5
– At least 3 drives to give the capacity of 2 drives (one drive’s worth of space holds parity, so N drives yield the capacity of N - 1).
• Three disk heads retrieving data are faster than one disk head.
– The typical implementation is 5 drives.
• CPU
– CPU speed
– Multiple CPUs
• Memory
– 1 GB or higher is recommended
• Network speed: 10 vs. 100 vs. 1000 Mbps
– Switch vs. hub
Operating System
• With the same hardware configuration:
– Windows NT vs. NetWare 3.12
• Windows NT is about 50% faster
– Windows 2000 vs. Windows NT
• Windows 2000 is about 50% faster
• PSQL support
– PSQL has better support on Windows NT/2000
• Tests are based on Netcellent’s in-house environment. We have not tested NetWare 4, 5, or 6, so a fair comparison is not available.
Database
• PSQL 2000 is significantly faster than PSQL 7 or Btrieve V6.1x
• According to our initial tests, PSQL 8 deploys client-side caching, resulting in about a 50% performance improvement for most reports.
• Database page size
– 1K page size was the default in Elliott.
– 4K is recommended (50% - 100% performance improvement because of less disk I/O; see the rough arithmetic after this list).
– New Elliott databases created after 7.13.316 default to 4K.
– Use the Rebuild utility to change from 1K to 4K.
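To see why the larger page helps, a rough example with assumed figures (not from the presentation): a 4K page holds four times as much data as a 1K page, so a full scan of a 40 MB file takes on the order of 40,000 reads at a 1K page size but only about 10,000 reads at 4K, and each read brings more useful records into the cache.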
Demo Rebuild Database
• Find the article on http://www.elliott.com with keyword “Performance”
• Find the article on http://www.elliott.com/kb with keyword “Rebuild”
• How do you know if you should do a rebuild?
– Use BUTIL -STAT ARCUSFIL.BTR to find the page size
• Run the Rebuild utility provided by Pervasive:
– Select the files
– Change to 4K size in the Options window
– Start the rebuild
Elliott Application Software
• Purge your data
– Purge COP posted orders
– Purge I/M inventory trx audit trails
– Purge distribution files
– Purge A/R and A/P open item files
• Do not purge these files (they are needed for Sales Analysis)
– COP history trx file (CPHSTTRX)
– COP invoice history files
Posting in Elliott V7 vs. V6.x
• Elliott V6.7x uses two-phase posting in COP
– DOS supports up to 30 files open simultaneously
– COP posting requires more than 30 files
– Posting is divided into two phases so that each phase won’t exceed 30 files
• Elliott V7 uses one-phase posting in COP
– Windows has no such limit on the number of open files
– Posting speed is about 50% faster than V6.x
Case Study 1
• A customer in Omaha, NE upgraded from NetWare 3.12 to Windows 2000. They were disappointed that there was not much performance improvement.
– Network: 10Base-T
– Server: high-end Windows 2000 server
– Aging report: 5 minutes
– Post COP invoices: 1 hour 30 minutes
• Solution: run reports and posting on the server
– Aging report: 10 seconds
– Post COP invoices: 4 minutes
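Worked out from the numbers above: the aging report went from 5 minutes (300 seconds) to 10 seconds, a factor of 30, and COP invoice posting went from 90 minutes to 4 minutes, a factor of roughly 22.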
Performance Demo
• Run the A/R Aging Report
– Run on a workstation
– Run on the Windows 2000 server
• Why didn’t we achieve the 30-times factor?
– 10Base-T vs. 100Base-T
– Hardware (a 3K vs. a 10K investment)
• What if we have a 1000Base-T (Gigabit) network?
– Less performance improvement when running on the server
– Still benefits from more reliable data updates
Case Study 2
• A customer in southern California upgraded from Macola V6.70 to Elliott V7.1
– NetWare 3.12 server -> Windows 2000 server
– Database page size 1K -> 4K
– Old COP posting: average 1 hour or more
– New COP posting (on the server): 1 minute
Analysis
• NetWare 3.12 vs. Windows 2000: factor of 2
• Elliott V6.70 vs. V7.1: factor of 1.5
• Page size 1K vs. 4K: factor of 2
• Faster hardware: factor of 2
– Disk array & 10K RPM drives
– CPU speed (2 GHz vs. 512 MHz)
– Memory (1 GB vs. 512 MB)
• Posting on the server: factor of 2 to 30
• Combined factor: 24 to 360 (see the arithmetic below)
– Depends on where the bottleneck is
– Most of the time, the bottleneck is the network speed
– When posting on the server, the bottleneck shifts to the server
• CPU, RAID controller, hard drive, memory, etc.
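The combined range follows from multiplying the individual factors: 2 x 1.5 x 2 x 2 = 12 for the operating system, Elliott version, page size, and hardware upgrades; multiplying by the posting-on-server factor gives 12 x 2 = 24 at the low end and 12 x 30 = 360 at the high end.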
Formulating A Strategy
• Use Windows 2000 Server
• Run heavy-duty jobs directly on the server
– Faster
– Reliable & less chance of data corruption
– Takes advantage of deferred processing
– Can’t do this with NetWare
• Use ODBC with Crystal
– Performance with client/server
– Ease of use with views
– Security
• Purge your Elliott data
Questions & Answers