This is the second part, presenting the initial test results of the State History using CTF as input.

Now that Jabbatrace's API is done, it is possible to recreate the complete set of state changes I had when using LTTng 0.x traces! Some small differences remain, but at least the quantity of data generated from the same events should be quite similar. For example, we don't have the exec() events anymore to set the "process name" attribute, but it gets set at the fork() instead.
Yes, yes, I know, it does not get updated since we don't see the exec(), so the data is not *right* per se. But remember that at this point I'm just testing the behavior of the state library, I don't really care if the data is right, I only care that there *is* data. ;)

One small note: I also upgraded my Eclipse to 3.8 and my OpenJDK to version 7, to get support for switch statements on strings. Event IDs are not constant anymore in LTTng 2.0, but event names are. It was MUCH simpler to do event type comparisons this way, rather than reading all the metadata at the start and building a conversion map. So the CTF input plugin now requires Java 7 ("Jabbatrace" and the rest of the State History can still run on Java 6 for now, it's just this plugin).
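Just to illustrate the idea (the event names and handler comments below are examples of my own, not the plugin's actual code), a Java 7 switch on the event name looks like this:

    // Illustrative sketch only: dispatch on CTF event names with a
    // Java 7 string switch, instead of mapping names to numeric IDs.
    class EventNameSwitchSketch {
        static void handleEvent(String eventName) {
            switch (eventName) {
            case "sched_switch":
                // update the current thread running on this CPU
                break;
            case "sched_process_fork":
                // set the child's "process name" attribute here, since
                // there is no exec() event to do it anymore
                break;
            default:
                // events we don't track are simply skipped
                break;
            }
        }
    }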

For the first time, I am benchmarking using my CTF traces. They are also bigger than my previous ones: the first one is around 50 MB, then the rest are 1, 5, 12, 28, and 550 GB.
(For the biggest one, Gnome reports the directory to be 575 GB, but Java says the total sum of the files is 548540 MB. I don't know who to believe anymore...)
I've used a complete History Tree with default settings for the backend. Partial histories are still hard-coded to VSTF traces, so those are not yet available for CTF input.

State History Construction
Graph 1
Graph 2
Graph 3

Excuse the lack of short descriptions, but multi-line hyperlinks are not very classy...
The 1st graph shows the basic time taken to create the complete state history for each trace, vs. the "ctf-dummy" input, which simply reads the trace but does not create any kind of state system. Remember that simply "opening" a CTF trace takes some time, since it gets indexed at the start. This indexing time is not taken into account in the first chart.
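For reference, the "ctf-dummy" baseline boils down to something like this (the interface and class names below are made up for illustration, not the actual plugin types): read every event, build nothing, so only the reading cost is measured.

    // Illustrative sketch of the "dummy" baseline input.
    interface EventSource {
        boolean hasMoreEvents();
        Object readNextEvent();
    }

    class DummyInput {
        static long consumeAll(EventSource source) {
            long count = 0;
            while (source.hasMoreEvents()) {
                source.readNextEvent(); // read and throw away
                count++;
            }
            return count;
        }
    }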

In the 2nd graph, I benchmarked only the indexing part (shown by the yellow line). I then reproduced the first two lines, offset by the indexing time. In this case it is not a direct measurement, but since indexing time is quite consistent, it should be representative enough.

The 3rd graph is simply a log-log view of the previous one.

As far as construction goes, we do not seem to suffer from any scalability problems. It does take *some* time (almost 10 hours to create the 550 GB trace's history from scratch), but most of that time is spent just reading the trace. Reading (not counting the indexing) averages 30 MB/s, which does not quite saturate the hard drive but is reasonable. There might be room for small improvements here.

Also, remember that the indexing problem will be taken care of eventually. The plan is to change the trace index to be stored in a separate file, and be created at trace-time by the consumer itself.

 

Next come the results of the query tests. They are not quite as exciting...
Full queries (log axis)
Single queries (log axis)

As you can see, the 550 GB trace gets destroyed in the query tests, especially single queries. This can probably be explained by looking at other metrics: the resulting History Tree is 614 levels deep! That metric is not very meaningful by itself (some weird places in the trace might create one very long branch, while the majority of the information could still be well spread out in the tree). The "average node usage %" would tell us more; however, the result for this tree is "-59%". Oops, integer overflow somewhere...

Interestingly, this means that there were more than 20 million nodes in this tree! (20 million * ~100 > Integer.MAX_VALUE of ~2 billion.) The tree does seem to have degenerated into a list somewhere, but at least it is still in working order, is not busting RAM, and is responding to queries, albeit more slowly. This behavior had already happened on a smaller scale when using small block sizes; after all, I was still using the default block size of 32 KB, which is probably not enough for histories measured in hundreds of gigs.
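To make the overflow concrete, here is a minimal sketch (hypothetical names, not the library's actual code) of how an average over ~20 million per-node percentages can wrap around when the running sum is an int, and how accumulating into a long avoids it. I'm not claiming this is the exact shape of the real calculation or of the fix, just the general idea:

    // Hypothetical sketch of the "-59%" average node usage bug.
    class NodeUsageAverageSketch {

        // Buggy: ~20 million nodes at ~100 each sums to ~2 * 10^9,
        // which overflows the int accumulator and goes negative.
        static int averageUsageBuggy(int[] nodeUsagePercent) {
            int sum = 0;
            for (int usage : nodeUsagePercent) {
                sum += usage; // wraps past Integer.MAX_VALUE
            }
            return sum / nodeUsagePercent.length;
        }

        // Safer: accumulate into a long before dividing.
        static int averageUsageFixed(int[] nodeUsagePercent) {
            long sum = 0;
            for (int usage : nodeUsagePercent) {
                sum += usage;
            }
            return (int) (sum / nodeUsagePercent.length);
        }
    }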

I immediately pushed a fix for this percentage-calculation problem, and I am now rebuilding that tree to see what the real value was (yeah, I unfortunately deleted it because of limited disk space). Afterwards I will rebuild it with a much bigger block size (probably 1 MB-4 MB). This shouldn't affect construction time much (we still write the same amount of data, just in different places), but it should hopefully help the query results for the bigger traces.

 

Update: By popular demand, I've added both log and non-log versions of the query results. The normal version shows the "scalable" aspect (or lack thereof) better, while the log X-axis makes the results for the smaller traces easier to see.
Check the next post for comparisons with bigger block sizes; with 1 MB blocks, the query results are already better!