Accelerate EDB Postgres Advanced Server
with HPE Persistent Memory on
HPE ProLiant servers
Technical white paper
Contents
Executive summary
Solution overview
  HPE Persistent Memory
Solution components
  Hardware
  Software
  Best practices and configuration guidance
Workload description
Test results
  Transaction throughput
Analysis and recommendations
  Checkpoint impact on performance
  System memory impact on performance
  Removes the bottleneck of WAL write times
  Improves throughput and resource utilization
  Cost-effective solution as compared to SSDs
Summary
Installation and configuration
  EDB Postgres Advanced Server
  HammerDB
Run test
  Generate the initial database
  Test procedure
Appendix A: Bill of materials
Appendix B: Memory configuration
Appendix C: EDB Postgres Advanced Server configuration parameters
Resources and additional links
Executive summary
Today's business challenges place ever-increasing demands on database implementations: faster transaction processing, scalable capacity, and increased flexibility. At the same time, enterprises are looking for cost-effective, open-architecture, industry-standard solutions that avoid vendor lock-in and the high price tag attached to proprietary solutions.
HPE ProLiant servers with the HPE 8GB NVDIMM, the first offering in the HPE Persistent Memory family, provide persistent storage with the
performance of DRAM. This provides new opportunities for acceleration, even over high-performance storage products such as SSDs.
PostgreSQL is a multi-model database management system that combines traditional relational capabilities with document and other NoSQL
capabilities. EDB Postgres Advanced Server is based on open source PostgreSQL, the world's most advanced open-source database
management system, and provides additional capabilities such as enterprise security, advanced performance, database compatibility with
Oracle, and DBA and developer features for enhanced productivity.
This white paper demonstrates the ease of configuring EnterpriseDB’s EDB Postgres Advanced Server 9.5 database to use HPE 8GB NVDIMMs
on HPE ProLiant DL360 and DL380 Gen9 E5-2600 v4 servers to significantly improve transaction throughput and reduce solution cost.
Target audience: This white paper is intended for IT professionals and database administrators who use, program, manage, or administer large
databases that require high availability and high performance.
Solution overview
HPE Persistent Memory
To gain a real competitive advantage, you need to enable faster business decisions. The HPE 8GB NVDIMM delivers outstanding performance to
put data to work more quickly in your business. HPE Persistent Memory offerings are not just new hardware technology, but a complete software
ecosystem designed to work with today’s applications and workloads, including databases and analytics workloads.
The HPE 8GB NVDIMM is the first offering in the HPE Persistent Memory product category. It delivers the performance of memory with the
resiliency you have come to expect from HPE storage technology. Customers can have confidence that business-critical data is safe because
HPE utilizes higher endurance DRAM and components that help verify data is moved to non-volatile technology in the event of a power loss.
Figure 1. HPE 8GB NVDIMM
HPE Persistent Memory offers the following:
Turbo-charged performance delivering faster transaction performance
HPE ProLiant DL360 Gen9 and HPE ProLiant DL380 Gen9 servers equipped with HPE 8GB NVDIMMs increase performance for write-intensive
workloads. The transaction logging component of the online transaction processing (OLTP) workload, due to its write-intensive nature, is ideal
for acceleration with HPE 8GB NVDIMMs. The lower latency and higher bandwidth of the HPE 8GB NVDIMM enable significantly higher rates of
transaction log writes, resulting in increased overall transaction throughput.
Technology designed to make your business data resilient
Upon a power loss or cold reset, the HPE 8GB NVDIMM copies data from DRAM to flash using power provided by the HPE Smart Storage Battery.
Solutions designed around your business workloads
HPE Persistent Memory is designed around industry applications and workloads to deliver the performance of memory with the persistence of
storage. A complete hardware and software ecosystem provides a comprehensive HPE Persistent Memory solution for your business.
With HPE 8GB NVDIMMs in the server, the EDB Postgres Advanced Server database can be configured to store its transaction logs in the HPE 8GB
NVDIMMs, benefiting from lower latency and higher bandwidth. We have tested multiple configurations described below and analyzed the results to
demonstrate the benefits of using HPE 8GB NVDIMMs in HPE ProLiant servers for the EDB Postgres Advanced Server database solution.
Solution components
Hardware
Four hardware configurations were tested. Each hardware configuration included:
• HPE ProLiant DL380 Gen9 server
• Two 12-core Intel® Xeon® E5-2650 v4 processors at 2.20 GHz
The NVDIMM+SSD hardware configurations included:
• 32 GiB memory (two 16 GiB regular RDIMMs) or 16 GiB memory (two 8 GiB regular RDIMMs)
• Three HPE 8GB NVDIMMs configured as a single pmem block device with NVDIMM-N interleaving enabled, using the ext4 filesystem for the
database transaction log
• Seven 800 GB SAS SSDs behind an HPE Smart Array P840 controller configured as a RAID 5 logical drive, using the ext4 filesystem for
database tables
The SSD-only hardware configurations included:
• 32 GiB memory (two 16 GiB regular RDIMMs) or 16 GiB memory (two 8 GiB regular RDIMMs)
• Two 800 GB SAS SSDs behind an HPE Smart Array P840 controller configured as a RAID 1 logical drive, using the ext4 filesystem for the
database transaction log
• Seven 800 GB SAS SSDs behind an HPE Smart Array P840 controller configured as a RAID 5 logical drive, using the ext4 filesystem for
database tables
Software
• Red Hat® Enterprise Linux® 7.3
• EnterpriseDB’s EDB Postgres Advanced Server 9.5
Best practices and configuration guidance
System Utilities (RBSU) configuration
• Hyper-threading—Enabled
• Intel® Turbo Boost—Enabled
• NVDIMM-N Interleaving—Enabled
• HPE Power Profile—Maximum performance
HPE 8GB NVDIMM configuration
• The HPE 8GB NVDIMMs must be configured according to the population rules outlined in the QuickSpecs and the HPE 8GB NVDIMM User
Guide for HPE ProLiant Gen9 servers. See Appendix B for the configuration used for this testing.
• Use HPE Persistent Memory Manager for Linux 64-bit to evaluate and monitor your HPE 8GB NVDIMM configuration and related
components. See the HPE Persistent Memory Manager User Guide to learn more.
• Balance the total memory capacity across all processors.
• Only RDIMMs can be mixed with HPE 8GB NVDIMMs. No other memory types may be used when HPE 8GB NVDIMMs are present.
Workload description
HammerDB, an open-source database load testing and benchmarking tool available from hammerdb.com, implements an online transaction
processing (OLTP) type workload. The tool sends requests for new orders, payment, delivery, order status, stock level transactions, and
rollbacks from multiple users simultaneously. The database is expected to comply with Atomicity, Consistency, Isolation, and Durability
(ACID) properties to ensure data integrity by recording all transactions in a transaction log while processing the transaction requests.
HPE ran the OLTP test on a 1 TB database that was created with 5000 HammerDB warehouses. After the database was installed and
configured, transactions were recorded at different connection levels to identify a suitable number of connections for the database
configuration. HammerDB reports New Orders Per Minute (NOPM) as the key metric for comparing database performance. NOPM
measures the number of new orders successfully entered per minute during the test run by all connected clients while the system is
processing other concurrent requests from these clients.
We ran tests on each of the four hardware configurations. In the NVDIMM+SSD hardware configurations, the PostgreSQL Write-Ahead Logging (WAL)
log was placed on the pmem block device created from HPE 8GB NVDIMMs. In the SSD-only hardware configurations, the WAL log was placed on the
logical drive created from SAS SSDs. In the 32 GiB system memory configurations, shared_buffers was set to 12 GiB (i.e., 40% of system
memory); in the 16 GiB system memory configurations, shared_buffers was set to 6 GiB (i.e., 40% of system memory).
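Placing the WAL on the pmem device relies on PostgreSQL 9.5 keeping its WAL in the pg_xlog directory under the data directory. One common pattern, sketched below in a scratch directory, is to move pg_xlog onto the pmem-backed mount and leave a symlink behind. The "data" and "pmem-wal" names are placeholders, not paths from this paper; in a real run the server must be stopped first, and note that the EDB installer can also set the WAL location directly during installation.

```shell
# Scratch-directory illustration of relocating the 9.5 WAL directory
# (pg_xlog) onto a pmem-backed mount via a symlink. "data" stands in for
# the real data directory and "pmem-wal" for the pmem mount point.
mkdir -p data/pg_xlog pmem-wal
mv data/pg_xlog pmem-wal/pg_xlog
ln -s "$(pwd)/pmem-wal/pg_xlog" data/pg_xlog
ls -ld data/pg_xlog   # now a symlink pointing at the pmem-backed directory
```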
Test results
Transaction throughput
Figure 2 shows the NOPM results with the WAL log on a logical drive created from SAS SSDs vs. on a pmem block device created from HPE 8GB
NVDIMMs, with different system memory capacities and WAL log sizes.
Figure 2. Test results
Figure 2 shows that HPE 8GB NVDIMMs provided higher transaction throughput than SAS SSDs in all configurations. This is because the lower
latency and higher bandwidth of the HPE 8GB NVDIMMs enable significantly higher rates of WAL log writes, resulting in increased overall
transaction throughput.
WAL I/O throughput
We measured WAL I/O statistics during test runs using the Linux iostat utility. By default, iostat is not enabled for the pmem block devices. The
following shell command enables iostat for the pmem0 device:
sudo bash -c "echo 1 > /sys/block/pmem0/queue/iostats"
Enabling iostat adds overhead and is not appropriate for production systems; actual performance will be higher than what is
presented here.
Mounting the ext4 filesystem with the -o dax option produces the best results but prevents iostat data from being collected. However,
performance without the -o dax option still produces higher throughput than WAL on a logical drive made from SAS SSDs and is sufficient for this
analysis. All data presented here is based on results without the -o dax option.
The following command captures iostat samples every second for a duration of 900 seconds during a test run:
iostat -x -c -m -d -t /dev/pmem0 1 900 > wal.iostat
Figure 3, Figure 4, and Figure 5 show only 5 minutes of the 15 minutes of collected data. This is done for better visualization; the same
patterns appear in the remaining data.
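The per-second samples captured above can be summarized with a short awk pipeline. A sketch follows, run here against a small inline sample rather than the real wal.iostat; note that the column positions (wMB/s as field 7 with iostat -x -m) vary between sysstat versions, so the field index is an assumption to adjust for your system.

```shell
# Average write throughput (wMB/s) for the pmem0 device from an iostat
# capture. The inline sample stands in for the real wal.iostat file, and
# field 7 assumes one common "iostat -x -m" column layout.
cat > sample.iostat <<'EOF'
pmem0 0.00 0.00 0.00 1000.00 0.00 500.00
pmem0 0.00 0.00 0.00 1200.00 0.00 700.00
EOF
awk '/^pmem0/ { sum += $7; n++ } END { if (n) printf "avg wMB/s: %.2f\n", sum / n }' sample.iostat
```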
Figure 3 compares WAL I/O throughput between NVDIMM+SSD and SSD-only configurations.
Figure 3. I/O throughput to WAL logs
Figure 3 shows that I/O throughput to the WAL logs on the pmem block device is significantly higher than the I/O throughput to WAL logs on
the logical drive created from SAS SSDs. This results in significantly higher NOPM.
WAL I/O utilization
Figure 4 compares the WAL I/O utilization between NVDIMM+SSD and SSD-only configurations.
Figure 4. I/O utilization for accesses to WAL logs
Figure 4 shows the WAL I/O utilization for the logical drive created from SAS SSDs reaches as high as 90%. The I/O utilization drops significantly
lower, ranging between 5% and 10%, when the WAL is on the pmem block device.
Application CPU utilization
Application CPU utilization also increases when WAL is moved to the pmem block device for all tested configurations; the higher CPU utilization
reflects the removal of the transaction log bottleneck. Figure 5 compares application CPU utilization between NVDIMM+SSD and
SSD-only configurations, using the test configuration with 16 GiB WAL and 12 GiB shared_buffers.
Figure 5. Application CPU utilization
Analysis and recommendations
The testing demonstrates the performance benefits of using the HPE 8GB NVDIMM with the EDB Postgres Advanced Server database.
HPE NVDIMMs can be easily configured to contain EDB Postgres Advanced Server WAL logs. The primary goal of our testing was to
demonstrate relative performance gains achieved through moving WAL space to the pmem block device created from HPE 8GB NVDIMMs
compared to the logical drive created from SAS SSDs. We recommend customers analyze their workload and tune the database accordingly.
Because WAL files can grow beyond the maximum set in the postgresql.conf file, we recommend that you configure sufficient storage capacity for the
maximum WAL file size. For the 16 GiB WAL configuration, we saw the WAL files grow to 19 GiB before falling back to 16 GiB. This is
because, while a checkpoint is triggered when the maximum WAL size is reached, transactions continue to be processed and logs continue to be
generated until the checkpoint process catches up and log files can be removed or recycled.
Checkpoint impact on performance
A checkpoint is triggered when the checkpoint timeout expires or the maximum WAL size is exceeded. This places significant demand on the
primary storage where the tables are stored, as the transaction logs are merged into the tables. This additional demand on the primary storage
can slow OLTP performance, resulting in a lower NOPM number. We recommend that customers analyze peak demand and the amount of WAL
data that might be generated during the peak workload, and set the WAL size accordingly to reduce checkpoint triggers during peak workload.
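The parameters involved can be illustrated with a postgresql.conf fragment. The values below are illustrative only, not the settings used in this paper; the parameter names are those of PostgreSQL 9.5.

```
# postgresql.conf fragment (illustrative values, not from the paper)
max_wal_size = 16GB                  # checkpoint forced when WAL exceeds this
checkpoint_timeout = 15min           # checkpoint also triggered on this interval
checkpoint_completion_target = 0.9   # spread checkpoint I/O across the interval
```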
System memory impact on performance
The WAL buffer used by Postgres is a percentage of the shared_buffers allocation. The WAL buffer size can also be set separately in the
configuration file, but we did not test this option to determine the optimum WAL buffer size. We used 40% of system memory
for shared_buffers in all tested configurations. A larger system memory configuration leads to a larger shared_buffers allocation and a larger WAL
buffer. We measured performance with both a small and a large shared_buffers setting to understand what impact an increased WAL buffer has on
performance. Although WAL logs on the pmem block device created from HPE 8GB NVDIMMs produce higher transaction throughput than SSD WAL
for both settings, we generally see a higher performance gain with the larger shared_buffers allocation.
The following advantages can be achieved with this solution:
Removes the bottleneck of WAL write times
The iostat data in Figure 3 shows that WAL on the pmem block device created from HPE 8GB NVDIMMs achieves higher I/O throughput
than the logical drive created from SAS SSDs, due to the lower latency and higher bandwidth of the pmem block device. The removal of this
bottleneck is also reflected in the increased CPU utilization when WAL is moved to the pmem block device, and results in a 9% to 24% increase
in the HammerDB NOPM metric, as shown in Figure 2.
Improves throughput and resource utilization
Higher transactions per minute were achieved when using the pmem block device created from HPE 8GB NVDIMMs as compared to the logical
drive created from SAS SSDs. The I/O utilization for the WAL log also drops significantly when the WAL log is moved to the pmem block device.
This allows room for growth if other bottlenecks are removed. We also see overall system CPU utilization increase when the WAL log is moved to
the pmem block device.
Cost-effective solution as compared to SSDs
The HPE 8GB NVDIMM can be a cost-effective means of achieving better performance. For this testing, three HPE 8GB NVDIMMs cost less than
half as much as the two SSD drives they replaced, using example prices of $3,050 per drive and $899 per NVDIMM.1 This results in
a lower overall solution cost as well as a lower cost per transaction.
1. 3 x $899 = $2,697 vs. 2 x $3,050 = $6,100; the NVDIMM configuration costs less than half as much.
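The footnote's arithmetic can be checked with shell integer arithmetic, using the example prices given above:

```shell
# Integer arithmetic on the example prices from the footnote.
nvdimm_cost=$(( 3 * 899 ))    # $2697 for three NVDIMMs
ssd_cost=$(( 2 * 3050 ))      # $6100 for two SSDs
# Ratio in hundredths: 226, i.e., the SSD option costs ~2.26x as much.
echo "ratio x100: $(( ssd_cost * 100 / nvdimm_cost ))"
```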
Summary
The availability of the HPE 8GB NVDIMM significantly improves EnterpriseDB's EDB Postgres Advanced Server database performance by combining the
speed of memory with the data persistence of storage. The benefits include increased transaction throughput and reduced solution cost. In addition,
the cost of HPE 8GB NVDIMMs makes them an attractive alternative to SSDs. Future improvements to EDB Postgres Advanced Server to
use direct byte-addressable access to HPE Persistent Memory will remove additional latency from transaction log processing and should result
in increased transaction throughput. This will also reveal where additional bottlenecks exist and further improvement is possible.
The following HPE portals present the HPE Persistent Memory solutions and use scenarios with various enterprise operating system
environments:
HPE Persistent Memory
hpe.com/info/persistentmemory
HPE Persistent Memory by HPE Technology on YouTube
youtube.com/watch?v=BKA_SOPqHfg
HPE on Twitter
twitter.com/hpe
HPE Persistent Memory for HPE ProLiant Servers: The performance of memory with the persistence of storage
hpe.com/h20195/v2/GetPDF.aspx/4AA6-4680ENW.pdf
Introducing Persistent Memory from Hewlett Packard Enterprise by HPE Discover
youtube.com/watch?v=vMrzXOBSeqA
Installation and configuration
EDB Postgres Advanced Server
EnterpriseDB is the leading worldwide provider of enterprise-class products and services based on PostgreSQL, the world’s most advanced and
independent open-source database.
EDB Postgres Advanced Server provides the most popular enterprise-class features found in the leading proprietary products but at a
dramatically lower total cost of ownership across transaction-intensive as well as read-intensive applications. Advanced Server also enables
seamless migrations from Oracle that save up to 90%, reducing the database spend by as much as 80%.
EDB Postgres Advanced Server installation
Download EDB Postgres Advanced Server by following the instructions at enterprisedb.com/products-services-training/products/postgresplus-advanced-server (you will need to log on with your EDB credentials). Follow the steps below to install and configure the software.
1. Download pmasmeta-9.5.0.0
2. Uncompress the tar.gz file to a suitable location
3. Run the pmasmeta-9.5.0.5-linux-x64.run executable
4. Select the language of your choice on the Language Selection screen and click Next
5. Select I accept the agreement on the License screen and click Next
6. Select the location where you want the binaries to be installed on the Installation Directory screen and click Next
7. Accept all the default components on the Select Components screen and click Next
8. Specify the data and WAL locations on the Additional Directories screen and click Next. Specify two separate mount points for the data and
WAL directories; the locations you specify must be empty
Figure 6. EDB Postgres Advanced Server Setup screen
9. Select Compatible with PostgreSQL for configuration mode on the next screen and click Next
10. Specify the Database Superuser (postgres) password on the Password screen and click Next
11. Click Next on the Additional Configuration screen
12. Select Dedicated on the Dynatune Dynamic Tuning: Server utilization screen and click Next
13. Select Transaction Processing on the Dynatune Dynamic Tuning: Workload profile screen and click Next
14. Click Next on the Advanced Configuration screen
15. Click Next on the Pre install summary screen
16. Click Next on the Ready to Install screen
For additional details on EnterpriseDB’s EDB Postgres Advanced Server installation, use the installation guide found at
get.enterprisedb.com/docs/EDB_Postgres_Advanced_Server_Installation_Guide_v9.5.pdf.
EDB Postgres Advanced Server configuration
Make a few changes to the postgresql.conf and pg_hba.conf files to meet the needs of the test environment. Open the postgresql.conf file with
a text editor and modify the following parameters:
listen_addresses = '*'
max_connections = 500
If HammerDB is installed on a remote machine, make changes to pg_hba.conf to allow connections from remote servers. Insert the following
line at the end of the file and save it:
host    all    all    [IP address of the remote host/netmask]    trust
EDB Postgres Advanced Server installs a graphical interface that allows users to configure various database configuration parameters. You can
access the graphical tools through the Applications -> Postgres Advanced Server 9.5 menu in the GNOME desktop, as shown in Figure 7.
Figure 7. EDB Postgres Advanced Server graphical interface on GNOME desktop
HammerDB
HammerDB is open-source database testing and benchmarking software. It supports many databases, such as PostgreSQL, MySQL, and
Microsoft SQL Server. The software is easy to install, provides a graphical user interface, and includes a built-in workload generator based on
industry standards. It can be installed on the same server where the database is running or on a separate machine connected to the server
under test over a network.
HammerDB installation
Download the Linux 64-bit version of HammerDB from hammerdb.com/download.html. Once downloaded, follow these instructions to install
and configure the software:
1. Open a terminal in the directory where the HammerDB-2.20-Linux-x86-64-Install file was downloaded
2. Run chmod +x HammerDB-2.20-Linux-x86-64-Install
3. Run ./HammerDB-2.20-Linux-x86-64-Install
4. Follow the prompts and wait for the installation to complete
The following is a screenshot of the completed HammerDB installation.
Figure 8. HammerDB installation complete
5. Run cd /opt/HammerDB to change to the directory where HammerDB is installed
6. Run export DISPLAY=:0.0 (adjust for your X display)
7. Run export LD_LIBRARY_PATH=/opt/PostgresPlus/9.5AS/lib
8. Run ./hammerdb.tcl
HammerDB will launch the following graphical user interface window.
Figure 9. HammerDB graphical user interface
9. Double-click on PostgreSQL on the Benchmark pane and select TPC-C on the Benchmark Options window. Click OK.
Figure 10. PostgreSQL benchmark selection
Run test
Generate the initial database
We used a 1 TB initial database to perform our test and used the following steps to generate the initial database with HammerDB:
1. Launch HammerDB and select the PostgreSQL from the left Benchmark pane
2. Expand TPC-C, Schema Build and then double-click Options
Figure 11. Schema Build options
Specify the PostgreSQL Host and the PostgreSQL Port of the server where EnterpriseDB Postgres Advanced Server is installed. If HammerDB is
installed on the same server as the database, leave the value for PostgreSQL Host as localhost. You can leave the Superuser, Superuser Password,
and Default Database values at their defaults unless you specified non-default values at installation. The next three fields (PostgreSQL User,
Password, and Database) are for the test database that HammerDB will create.
3. Move the Number of Warehouses slider all the way to the right to select 5000
4. Move the Virtual Users to Build Schema slider to the right to select 20
A higher number of Virtual Users to Build Schema generates the database faster, but it must not exceed the total number of available CPU
cores on the system where the database is installed. We have seen good performance with 30 to 40 virtual users on a system with 48 logical
CPUs (cores with hyper-threading enabled).
5. Click OK to set the schema build parameters
6. Double-click Build in the Schema Build list and the tool will launch the specified number of virtual users to generate the database
You will see the following screen when the launch is successful. The top right pane shows the detailed activity of each user. The bottom
pane has a status column on the right that indicates the current overall status of each user. Upon successful completion, a green check
mark appears in the status column; a failure is indicated by a red X.
Figure 12. Schema Build executing
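The sizing rule above (build users should not exceed the CPU count) can be checked with a short shell snippet before starting a build; BUILD_USERS here is the value chosen in step 4.

```shell
# Compare the chosen schema-build virtual user count against the number
# of logical CPUs (cores x hyper-threads) on this system.
BUILD_USERS=20
CORES=$(getconf _NPROCESSORS_ONLN)
if [ "$BUILD_USERS" -le "$CORES" ]; then
    echo "OK: $BUILD_USERS build users on $CORES logical CPUs"
else
    echo "WARNING: $BUILD_USERS build users exceed $CORES logical CPUs"
fi
```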
The database generation takes several hours to complete. If it fails for any reason after the process has started successfully, you must
manually delete the HammerDB-generated database and user (both named tpcc) using the command-line tools.
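The cleanup itself can be done with the standard PostgreSQL client tools. A sketch, assuming the default name tpcc for both the database and the user; the psql invocation in the comment is an assumption and should match your installation's host, port, and superuser.

```shell
# Write the cleanup SQL to a file so it can be reviewed before running it,
# for example with: psql -U postgres -f cleanup.sql
cat > cleanup.sql <<'EOF'
DROP DATABASE IF EXISTS tpcc;
DROP USER IF EXISTS tpcc;
EOF
cat cleanup.sql
```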
The following screen shows a successful creation of the database.
Figure 13. Schema Build complete
Once the database build is complete, stop the database server and copy the initial database to a backup volume; restoring from this copy saves
the time needed to return to the initial state of the test process. We used this initial database to run the different test scenarios for comparison.
Test procedure
We ran a HammerDB test with the initial 1 TB database residing on a logical drive created from SSDs and the transaction log residing on a
different logical drive created from SSDs. We then ran the same test with the database on the SSD logical drive and the transaction log on a
pmem block device. Before the pmem run, we reset the database to its initial state by copying the original database from the backup. We
compared the results of these two configurations, whose only difference was where the transaction log resided. Use the following steps to
start each test run with the same database:
1. Stop PostgreSQL by executing the following:
su postgres
cd /opt/PostgresPlus/9.5AS/bin
./pg_ctl -D /mnt/data stop
exit
2. Copy the initial database from the backup location to the data directory of PostgreSQL
cp -r /mnt/backup/data /mnt/data
cp -r /mnt/backup/wal /mnt/wal
3. Change the ownership of the directories to the PostgreSQL superuser
chown -R postgres /mnt/data
chown -R postgres /mnt/wal
4. Edit the postgresql.conf file located in the /mnt/data directory and apply the changes listed in Appendix C, then start PostgreSQL:
su postgres
cd /opt/PostgresPlus/9.5AS/bin
./pg_ctl -D /mnt/data start
exit
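The four steps above can be combined into one reset script. This is a sketch using the paths from this paper; with DRY_RUN=1 (the default here) it only prints each command, so the sequence can be reviewed before it is run against a live server.

```shell
#!/bin/sh
set -e
DRY_RUN=${DRY_RUN:-1}               # set DRY_RUN=0 to actually execute
PGBIN=/opt/PostgresPlus/9.5AS/bin   # EDB Postgres Advanced Server binaries

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

run su postgres -c "$PGBIN/pg_ctl -D /mnt/data stop"  # 1. stop PostgreSQL
run cp -r /mnt/backup/data /mnt/data                  # 2. restore data
run cp -r /mnt/backup/wal /mnt/wal                    #    and WAL copies
run chown -R postgres /mnt/data                       # 3. fix ownership
run chown -R postgres /mnt/wal
# 4. apply the Appendix C changes to /mnt/data/postgresql.conf, then start
run su postgres -c "$PGBIN/pg_ctl -D /mnt/data start"
```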
We tested several connection counts and found the optimal number for this hardware configuration to be 40 connections. We collected a
15-minute sample with 1 minute of ramp-up time on a timed test run. We manually edited the driver script to set all warehouses to true
before starting the test run.
Use the following steps to apply the HammerDB changes:
1. Expand Driver Script and then double-click Options
2. Select the Timed Test Driver Script option
3. Uncheck the Keying and Thinking Time option
4. Check the Vacuum and Checkpoint when complete option
5. Enter 1 in the Minutes for Rampup Time text box
6. Enter 15 in the Minutes for Test Duration text box and click OK
7. Double-click Load in the Driver Script tree
8. In the Edit Script screen, manually edit the script to enable all warehouses by following the instructions available at
hammerdb.com/hammerdb_modify_tpcc.pdf
9. Expand Virtual Users and then double-click Options
10. Enter 41 in the Virtual Users text box (the timed test uses one virtual user as a monitor, leaving 40 active connections)
11. Select the Show Output checkbox and click OK
12. Double-click Run in the Virtual Users tree to begin the test run
Appendix A: Bill of materials
Note
The bill of materials does not include complete support options or rack and power requirements. For questions regarding ordering, please
consult your HPE reseller or HPE sales representative. hpe.com/us/en/services/consulting.html
Table 1. Bill of materials

DL380 Gen9 server
Quantity   Part number   Description
1          767032-B21    HPE DL380 Gen9 24SFF CTO Server
1          817943-L21    HPE DL380 Gen9 E5-2650v4 FIO Kit
1          817943-B21    HPE DL380 Gen9 E5-2650v4 Kit
2          726719-B21    HPE 16GB 2Rx4 PC4-2133P-R Kit
3          782692-B21    HPE 8GB NVDIMM Single Rank x4 DDR4-2133 Module
1          802582-B21    HPE 400GB 12G SAS WI 2.5in SC SSD
9          802584-B21    HPE 800GB 12G SAS WI 2.5in SSD
1          749974-B21    HPE Smart Array P440ar/2G FIO Controller
1          779799-B21    HPE Ethernet 10Gb 2P 546FLR-SFP+ Adptr
2          720478-B21    HPE 500W FS Plat Ht Plg Pwr Supply Kit
Appendix B: Memory configuration
Figure 14 shows the memory configuration for the HPE DL380 Gen9 server, including which slots were populated with HPE 8GB NVDIMMs and
which ones had RDIMMs. For the memory population rules, see the HPE 8GB NVDIMM user guide.
Figure 14. Memory configuration
Appendix C: EDB Postgres Advanced Server configuration parameters
Configuration parameters modified in postgresql.conf file
shared_buffers = 12GB
max_files_per_process = 5000
effective_io_concurrency = 256
wal_sync_method = open_sync
checkpoint_timeout = 45min
checkpoint_completion_target = 0.9
random_page_cost = 1.1
max_wal_size = 8GB
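After restarting the server with these settings, it is worth confirming that each one took effect. A minimal sketch that emits one SHOW statement per modified parameter; the psql invocation in the comment is an assumption and should use your server's host, port, and superuser.

```shell
# Emit a SHOW statement per modified parameter; pipe into psql, e.g.:
#   sh show_settings.sh | psql -U postgres
for p in shared_buffers max_files_per_process effective_io_concurrency \
         wal_sync_method checkpoint_timeout checkpoint_completion_target \
         random_page_cost max_wal_size; do
    echo "SHOW $p;"
done
```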
Resources and additional links
HPE ProLiant DL380 Gen9 server
hpe.com/servers/dl380
HPE Reference Architectures
hpe.com/info/ra
HPE Technology Consulting Services
hpe.com/us/en/services/consulting.html
To help us improve our documents, please provide feedback at hpe.com/contact/feedback.
© Copyright 2016–2017 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without
notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements
accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard
Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Intel Xeon and Intel are trademarks of Intel Corporation in the U.S. and other countries. Oracle is a registered trademark of Oracle and/or
its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. Linux is the registered trademark
of Linus Torvalds in the U.S. and other countries. All other third-party trademark(s) is/are property of their respective owner(s).
4AA6-8881ENW, January 2017, Rev. 1