Efficient in processing a single message.
Processing high volumes concurrently.
Scaling out to servers of varying sizes.
Scaling down to low-spec devices.
The ultimate balance of speed and safety.
Benchmarks don't lie - liars do benchmarks
There are many parameters that affect measured performance. The most obvious is hardware - how many servers, how many CPU cores, how much memory, how much storage, how fast the storage is, how much redundancy, and the list goes on. Less obvious is what the system actually does - some message processing is trivial, while other messages may trigger historical reports that aggregate and sift through terabytes of information.
Most benchmarks can be viewed as nothing more than anecdotes.
What others are saying
In this presentation, Dave de Florinier mentions achieving throughput of 600,000 messages per hour on version 1.7 or 1.8 of NServiceBus.
On the discussion group, Raymond Lewallen posted throughput of 1.8 million messages per hour on version 1.9 of NServiceBus.
The most detailed breakdown of NServiceBus performance was done on version 1.8 and can be found here. The short and sweet version is 100 million durable and transactional messages per hour and 900 million non-durable messages per hour on 3 blade centers (48 blades), 30 1U servers, and 20 clusters.
One area of interest when evaluating a technology is how fast it can handle XML. NServiceBus has its own custom XML serializer that can handle classes and interfaces as well as dictionaries, and it does not use the WCF DataContractSerializer. Binary serialization is done using the standard .NET binary serializer.
Below you can find a comparison between the NServiceBus XML serializer and the WCF DataContractSerializer when processing small messages with 5 levels of nesting. The NServiceBus serializer performs better, at times as much as 40% faster.
Times measured are for 100 operations, so NServiceBus serializes a single message in 0.7-0.8 ms and deserializes it in roughly 1 ms. Larger messages take longer.
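The batch-of-100 methodology above can be sketched in a few lines. The following is a hypothetical Python illustration, not the NServiceBus benchmark itself: `make_message`, `time_batch`, and the use of `pickle` as a stand-in serializer are all assumptions made for the sake of the example. It shows how timing a batch of operations and dividing by the batch size yields the per-message figures quoted above.

```python
import pickle
import time


def make_message(depth=5):
    # Hypothetical small message with 5 levels of nesting, standing in
    # for the .NET test messages described above.
    node = {"value": 42, "name": "leaf"}
    for level in range(depth - 1):
        node = {"level": level, "child": node}
    return node


def time_batch(operation, payload, batch_size=100):
    """Time a batch of operations and return milliseconds per operation."""
    start = time.perf_counter()
    for _ in range(batch_size):
        operation(payload)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return elapsed_ms / batch_size


message = make_message()
serialized = pickle.dumps(message)

ser_ms = time_batch(pickle.dumps, message)
deser_ms = time_batch(pickle.loads, serialized)
print(f"serialize: {ser_ms:.4f} ms/op, deserialize: {deser_ms:.4f} ms/op")
```

Dividing the total batch time by the batch size smooths out timer resolution and per-call jitter, which is why benchmarks like the one above report times for 100 operations rather than a single call.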