On the contrary, my dear igouy, specialisation can be key, and knowing which tool in my toolset is best for different tasks is indeed insightful & helpful ;-)
re: contribute - Looking on my hard disk, I see that I downloaded the bencher/shootout-scm back on 1st Jan 2010. However, IIRC the process of contributing code back was a bit unwieldy. If some free tuits come my way then I may take another look at it.
Looking at my keychain I see I have three logins for alioth... draegtun, draegtun-guest & draegtun_guest... all created on 1st Jan 2010.
So it looks like I had issues logging (back) onto Alioth at that time :(
Anyway, it's resolved now, because I see that draegtun-guest does work for me :)
It's a little convoluted, and the contribute notes don't initially match what you see at login, but after rummaging around I found the required tracker and have now submitted the faster fasta.pl.
Frankly, I'm not convinced there's any point in investing time improving the benchmarks there, considering the overall comparison directly contradicts the detailed comparison by putting Perl in a worse position than languages it outperforms.
Replying to myself because I want to add something interesting (to posterity) that I noticed today on Alioth:
Perl (or the OS) was upgraded from 5.14.* to 5.16.2. From a cursory glance this gave all the Perl benchmarks a little boost. For example, my fasta is about 3 secs quicker, and the "interesting alternative" fasta dropped below the 2.0 barrier (now timed at 1.96).
However, on the summary Perl slowed down a few points (here's the new bottom five on the u32 single-core benchmark):
I think the drop is because the Perl pidigits benchmark is now failing: the Math::GMP module can't be found. I'm pretty sure this wasn't a core module, so perhaps a Perl dependency has been removed in the OS (Debian).
PS. This may be a temporary glitch, so the dependency may be restored soon. If not, then I may amend the pidigits benchmark accordingly.
PPS. I see that the Python pidigits is using gmpy and is working fine. This means GMP is installed (as is the gmpy Python library), so it's just the Math::GMP Perl module that's missing :(
> You don't seem to understand what the overall comparison shows.
I told you I don't. It looks entirely nonsensical. I asked you for clarification. So far your only response has been to parrot back my saying that I don't understand why your data representations are dissonant.
> Are you familiar with descriptive statistics?
Possibly under another name, but I don't know what you mean when you say that.
> Quartiles?
In theory, yes; I am unsure how you're applying them here, since we're not talking about binnable quantities.
>> I told you i don't. It looks entirely nonsensical. I asked you for clarification. So far your only response has been to parrot my saying that i don't understand why your data representations are dissonant. <<
It would have been better if you had said -- "English is not my primary language, and English maths terminology is especially hard for me to grasp." -- instead of saying "it certainly seems deceptive".
You say you are familiar with box plots, so you should have no difficulty understanding what the box plot shows: the Perl and Ruby programs have very similar performance when compared to the fastest programs.
"Visual Presentation of Data by Means of Box Plots"
Those two sentences are not a contradiction. I may not be good at reading English descriptions of math, but I am good with applied math. The calculations I did with your numbers disagree with what your graph showed, so to me the graph seems deceptive. There is no contradiction in this.
Further, if you show me the actual calculations done, I will understand them perfectly fine. Yet you refuse to do so. I do not understand why, and I hope you can understand how that makes me even more distrustful.
On the graph on the overview page, Perl was shown to significantly outperform Ruby in a number of benchmarks; yes, I could see that. Yet the median for Perl was still higher than the median for Ruby, which could possibly be explained by Perl also being significantly outperformed in one benchmark, but that was not supported by the actual direct comparison numbers.
So I ask again: please show me the actual calculations performed to arrive at the median values shown in the overview graph.
Okay, I worked it out, no thanks to you. Actually, I fucking worked it out IN SPITE of you. All your condescending hints and links and such were entirely bullshit and did not even remotely lead in the direction of explaining why the data seems dissonant. They were flat-out orthogonal to the entire problem.
The important thing, which you did not bother to point out here even once, is that the comparisons on the overview page are done against the fastest programs of all languages, thus weighting the results by a factor that is simply not present when one language is compared directly against another.
So, alright, the graphs do entirely make sense.
Would you be open to a patch that reworks the language vs. language comparison pages in such a manner as to make this relationship obvious?
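The weighting described above can be sketched with a toy example: if each benchmark's score is the ratio of a language's time to the fastest program in any language, a language can win most head-to-head comparisons and still end up with the higher median ratio. All the timings below are invented purely for illustration:

```python
from statistics import median

# Toy timings in seconds; "fastest" is the best program in ANY language
# for each benchmark, as used on the overview page. Numbers are made up.
fastest = [1, 1, 1, 1, 1]
perl    = [1, 2, 6, 7, 8]
ruby    = [5, 5, 5, 100, 100]

perl_ratios = [p / f for p, f in zip(perl, fastest)]
ruby_ratios = [r / f for r, f in zip(ruby, fastest)]

# Head-to-head, "Perl" is faster on 4 of the 5 benchmarks...
wins = sum(p < r for p, r in zip(perl, ruby))
print(wins)                 # 4

# ...yet its median ratio-to-fastest is higher, so the overview
# chart would still place it below "Ruby".
print(median(perl_ratios))  # 6.0
print(median(ruby_ratios))  # 5.0
```

The flip happens because the overview's median only cares about each language's distribution of ratios, not about which language won each pairwise matchup.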
For example, here is "Which programs are best" with those benchmarks removed: http://benchmarksgame.alioth.debian.org/u32/which-programs-a...
This shows Perl higher up the chart, being 49% better (on average) than Ruby.
Some other things to note:
1. Perl has never been fast with binary trees. But Ruby 1.8 was even worse (I recall it being many times slower than Perl at this). So hats off to the Ruby 1.9 VM guys, because they've turned things around and it's now even outpacing Lua on this benchmark! - http://benchmarksgame.alioth.debian.org/u32/performance.php?...
2. Fasta in Ruby may be twice as quick as Perl; however, it uses over 100 times more memory! This is because it inlines some code (eval), giving it a huge speed bump at the expense of a bigger program footprint. If I port this to Perl, then this fasta.pl runs 2.6 times faster than alioth's Perl version (which means the Ruby version is about 30% slower than its direct Perl equivalent).
3. My idiomatic make_repeat_fasta subroutine is actually a little slower than the one in fasta.pl on alioth. So both my fasta.pl & alioth's fasta.rb programs could be sped up further :)
4. I may even be able to shave a little bit more off the fasta.pl time. The same might be true for spectral-norm & binary-trees; however, the Perl versions aren't showing up on the alioth site at the moment :( ... IIRC, when I last looked at the Perl binary-trees code on alioth a couple of years ago, I was able to shave 10% off.
ref: My perl port of fasta.rb on alioth - https://gist.github.com/4675254
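The eval/inlining trade-off mentioned in point 2 can be sketched in Python (the alioth Ruby fasta uses a string-eval trick of this general shape; the function names below are invented for illustration, not taken from any of the benchmark programs):

```python
# Sketch of trading memory for speed by generating unrolled code at
# runtime: build a loop-free function body as source text, then compile
# it once with exec. Bigger unroll factors mean more generated source
# (and compiled bytecode) held in memory -- the footprint cost noted above.

def make_unrolled_adder(n):
    """Return a function summing exactly n list elements with no loop."""
    body = "\n".join(f"    total += values[{i}]" for i in range(n))
    src = f"def add_all(values):\n    total = 0\n{body}\n    return total\n"
    namespace = {}
    exec(src, namespace)           # compile the generated source once
    return namespace["add_all"]    # per-call cost drops; memory use grows

add_all = make_unrolled_adder(4)
print(add_all([1, 2, 3, 4]))  # 10
```

The generated function avoids loop overhead on every call, at the cost of keeping the expanded code around, which is the same shape of trade-off as the Ruby fasta's 100x memory footprint.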