CAVEAT
There are "Lies, damned lies, and statistics" but worse are probably performance measurements done by someone else. The real test is what does it mean for any given application and is performance "fit for purpose". Database-related performance measurements are particular murky. The shape of the data matters, the usage made of the data matters, all in ways that can wildly affect whether a system is for for purpose.
Treat these figures with care - they are given to compare the existing TDB bulk loader (up to version 0.8.7) with the new one (version 0.8.8 and later). Even then, the new bulk loader is still subject to tweaking and tuning, although hopefully only to improve performance, not worsen it.
- BSBM – Berlin SPARQL Benchmark
- COINS – Combined Online Information System from the UK Treasury.
- LUBM – Lehigh University Benchmark.
See also http://esw.w3.org/RdfStoreBenchmarking.
The new bulk loader is faster by 2x or more, depending on the characteristics of the data. As loads can take hours, this saving is very useful. It also produces smaller databases, and those databases perform as well as, or better than, the ones produced by the existing bulk loader.
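The figures below were produced with the command-line bulk loaders themselves, but for orientation, here is a minimal sketch of loading a file into a TDB database through the Jena API instead (package names are those of the TDB 0.8.x era; the database location and file name are illustrative):

```java
import com.hp.hpl.jena.query.Dataset;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.tdb.TDB;
import com.hp.hpl.jena.tdb.TDBFactory;

public class LoadExample {
    public static void main(String[] args) {
        // Connect to (or create) a TDB database in a directory.
        Dataset dataset = TDBFactory.createDataset("DB1");   // illustrative location
        Model model = dataset.getDefaultModel();

        // Read an N-Triples file into the default graph.
        model.read("file:data.nt", null, "N-TRIPLE");         // illustrative file name

        // Flush changes to disk and release resources.
        TDB.sync(dataset);
        dataset.close();
    }
}
```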
The tests were run on a small local server, not tuned or provisioned for database work, just a machine that happened to be easily accessible.
- 8GB RAM
- 4 core Intel i5 760 @ 2.8GHz
- Ubuntu 10.10 - ext4 file system
- Disk: WD 2TB - SATA-300, 7200 rpm, 64MB buffer
- Java version Sun/Oracle JDK 1.6.0_22 64-Bit Server VM
The BSBM published results are from Nov 2009.
The figures here are produced using a modified version of the BSBM tool set used for version 2 of BSBM. The modifications are to run the tests on a local database, rather than over HTTP. The code is available from GitHub. See also this article.
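A minimal sketch of that idea - issuing a SPARQL query directly against a local TDB dataset rather than through an HTTP SPARQL endpoint (the query and database location are illustrative, not taken from the BSBM driver):

```java
import com.hp.hpl.jena.query.Dataset;
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.ResultSet;
import com.hp.hpl.jena.query.ResultSetFormatter;
import com.hp.hpl.jena.tdb.TDBFactory;

public class LocalQueryExample {
    public static void main(String[] args) {
        // Open the TDB database directly - no HTTP layer involved.
        Dataset dataset = TDBFactory.createDataset("DB1");   // illustrative location

        String queryString = "SELECT * { ?s ?p ?o } LIMIT 10";

        // Execute the query in-process against the local store.
        QueryExecution qexec = QueryExecutionFactory.create(queryString, dataset);
        try {
            ResultSet results = qexec.execSelect();
            ResultSetFormatter.out(results);
        } finally {
            qexec.close();
        }
        dataset.close();
    }
}
```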
BSBM dataset | Triples | Loader 1 | Rate | Loader 2 | Rate |
---|---|---|---|---|---|
50k | 50,057 | 3s | 18,011 TPS | 7s | 7,151 TPS |
250k | 250,030 | 8s | 31,702 TPS | 11s | 22,730 TPS |
1m | 1,000,313 | 26s | 38,956 TPS | 27s | 37,049 TPS |
5m | 5,000,339 | 121s | 41,298 TPS | 112s | 44,646 TPS |
25m | 25,000,250 | 666s | 37,561 TPS | 586s | 42,663 TPS |
100m | 100,000,112 | 8,584s | 11,650 TPS | 3,141s | 31,837 TPS |
200m | 200,031,413 | 30,348s | 6,591 TPS | 8,309s | 24,074 TPS |
350m | 350,550,000 | 83,232s | 4,212 TPS | 21,146s | 16,578 TPS |
BSBM dataset | Size/loader1 | Size/loader2 |
---|---|---|
50k | 10MB | 7.2MB |
250k | 49MB | 35MB |
1m | 198MB | 137MB |
5m | 996MB | 680MB |
25m | 4.9GB | 3.3GB |
100m | 20GB | 13GB |
200m | 39GB | 26GB |
350m | 67GB | 45GB |
Numbers are "query mix per hour"; larger numbers are better. The BSBM performance engine was run with 100 warmups and 100 timing runs over local databases.
Loader used | 50k | 250k | 1m | 5m | 25m | 100m | 200m | 350m |
---|---|---|---|---|---|---|---|---|
Loader 1 | 102389.1 | 87527.4 | 58441.6 | 5854.7 | 1798.4 | 673.0 | 410.7 | 250.0 |
Loader 2 | 106920.1 | 86726.1 | 62240.7 | 11384.5 | 3477.9 | 797.1 | 425.8 | 259.2 |
What this does show is that, for a narrow range of database sizes around 5m to 25m, the databases produced by loader2 are faster. This happens because the working sets of the databases produced by loader1 did not fit mostly in-memory, whereas those produced by loader2 do.
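For reference, the BSBM headline metric above is derived directly from elapsed run time; a minimal sketch of the calculation (variable names and figures are illustrative, not from the BSBM test driver):

```java
public class QmphExample {
    // Query mixes per hour: the number of timed query mixes, scaled to an hour.
    static double queryMixesPerHour(int timedMixes, double elapsedSeconds) {
        return timedMixes * 3600.0 / elapsedSeconds;
    }

    public static void main(String[] args) {
        // e.g. 100 timed mixes completed in 3.4 seconds -> about 105,882 QMpH
        System.out.println(queryMixesPerHour(100, 3.4));
    }
}
```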
COINS is the Combined Online Information System from the UK Treasury. It's a real-world database that has been converted to RDF by my colleague, Ian - see the description of the conversion to RDF done by Ian for data.gov.uk.
General information about COINS.
COINS is all named graphs.
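Because the data is all named graphs, loading stores quads rather than triples. A minimal sketch of putting data into a named graph through the Jena API (the graph name, database location and file name are illustrative):

```java
import com.hp.hpl.jena.query.Dataset;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.tdb.TDB;
import com.hp.hpl.jena.tdb.TDBFactory;

public class NamedGraphExample {
    public static void main(String[] args) {
        Dataset dataset = TDBFactory.createDataset("DB-coins");           // illustrative location

        // Data goes into a named graph rather than the default graph.
        Model graph = dataset.getNamedModel("http://example/coins/2010"); // illustrative graph name
        graph.read("file:coins-slice.nt", null, "N-TRIPLE");              // illustrative file name

        TDB.sync(dataset);
        dataset.close();
    }
}
```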
Quads | Loader 1 | Rate | Loader 2 | Rate |
---|---|---|---|---|
417,792,897 | 26,425s | 15,811 TPS | 17,057s | 24,494 TPS |
Size/loader1 | Size/loader2 |
---|---|
152GB | 77GB |
LUBM isn't a very representative benchmark for RDF and linked data applications - it is designed more for testing inference. But there are published details of various systems using this benchmark. To check the new loader on this data, I ran loads for a couple of the larger generated datasets. These are the 1000 and 5000 datasets, with inference applied during data creation. The 5000 dataset, at just under a billion triples, was only run through the new loader.
LUBM dataset | Triples | Loader 1 | Rate | Loader 2 | Rate |
---|---|---|---|---|---|
1000-inf | 190,792,744 | 7,106s | 26,849 TPS | 3,965s | 48,119 TPS |
5000-inf | 953,287,749 | N/A | N/A | 86,644s | 11,002 TPS |
Database sizes:
Dataset | Size/loader1 | Size/loader2 |
---|---|---|
1000-inf | 25GB | 16GB |
5000-inf | N/A | 80GB |