Using the Janet Network performance test facilities
How members can use Jisc's network performance test servers to diagnose and troubleshoot their network issues.
Overview
Our test systems are hosted at two points of presence (PoPs) on the Janet backbone: those at our Slough data centre (DC) run at 10Gbit/s, while our newer facility at London is connected at up to 100Gbit/s.
All our servers support both IPv4 and IPv6, and all support a 9000-byte MTU (jumbo frames).
All Jisc network test facility servers and services run under the domain perf.ja.net.
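If you intend to use jumbo frames, one way to check that a 9000-byte MTU is available end to end is to send a non-fragmentable ping of the maximum payload size; on Linux, for example, 8972 bytes of payload plus 28 bytes of IPv4 and ICMP headers gives a 9000-byte packet:
$ ping -4 -M do -s 8972 iperf-slough-10g.perf.ja.net
If a hop on the path has a smaller MTU you will see 'message too long' errors rather than replies. For IPv6 the larger headers mean the maximum payload is 8952 bytes.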
Top tip: To obtain the best performance results, it is important to tune the system you are using to run your tests. There is useful guidance on ESnet's Fasterdata site, in particular under the host tuning section, and for good end-to-end performance you should also take note of the network and Science DMZ sections.
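As an illustration only (the values below are typical of Fasterdata-style host tuning advice rather than taken from this page, so check the current Fasterdata recommendations for your OS and link speed), Linux TCP buffer tuning for a 10Gbit/s host might look something like this in /etc/sysctl.d/90-netperf.conf:
# illustrative buffer sizes; consult Fasterdata for current values
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.tcp_wmem = 4096 65536 33554432
# a fair-queueing qdisc helps pace high-rate flows
net.core.default_qdisc = fq
Apply the settings with sudo sysctl --system (or reboot).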
iperf
The widely used iperf tool tests available throughput from a client to a server, or from the server back to the client. Tests default to using TCP.
There are two versions of iperf, developed and maintained separately, and we support both.
iperf2
This runs on its default port 5001. The client ships with many OSes, but the most recent version is available from sourceforge.net.
iperf3
This runs on its default port, 5201. Clients are available for a wide variety of OSes, including mobile platforms, and can be downloaded from iperf.fr.
You can test against the Slough server with either iperf2 or iperf3, using the server name iperf-slough-10g.perf.ja.net.
Example iperf2 test
The following shows iperf2 being run from another Jisc server to the iperf endpoint:
$ iperf -c iperf-slough-10g.perf.ja.net
------------------------------------------------------------
Client connecting to iperf-slough-10g.perf.ja.net, TCP port 5001
TCP window size: 11.1 MByte (default)
------------------------------------------------------------
[ 3] local 194.81.18.227 port 47990 connected with 194.81.18.231 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 11.5 GBytes 9.90 Gbits/sec
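An equivalent iperf3 test against the same endpoint would be (output not shown here; iperf3 reports per-second intervals followed by sender and receiver summaries):
$ iperf3 -c iperf-slough-10g.perf.ja.net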
To test in the reverse path from the server to the client you can use the --reverse option; this is useful if you wish to test throughput into your site.
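For example, with iperf3 (which also accepts the short form -R):
$ iperf3 -c iperf-slough-10g.perf.ja.net --reverse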
Whether a single stream test can fill a link depends on the operating system, the link capacity, the server hardware and the server (TCP) tuning. For most Linux cases, it should be possible to drive 10Gbit/s with a single iperf stream. If not, it is also possible to run multi-stream tests which should achieve greater performance, particularly if there is some packet loss on the path.
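A multi-stream test just adds the -P option with the number of parallel streams, for example four streams with iperf3:
$ iperf3 -c iperf-slough-10g.perf.ja.net -P 4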
Top tip: We recommend using perfSONAR if you wish to measure throughput periodically over the long-term or run non-contending throughput tests.
We support iperf testing at rates over 10Gbit/s and up to 100Gbit/s at our London PoP, but this is currently only available on request to netperf@jisc.ac.uk. In this case, using multi-stream tests will likely be required to fill the higher capacity path.
Ethr
Ethr is an alternative network performance test tool, written in Go. We provide an endpoint at our Slough PoP running at ethr-slough-10g.perf.ja.net.
Distributions of ethr are available on GitHub, where the README provides installation instructions for Windows, Linux and macOS. Specific CentOS installation information is also available.
Example Ethr test
The syntax to run the test is similar to iperf, and the full set of options can be seen with ethr -h:
$ ethr -c ethr-slough-10g.perf.ja.net
Connecting to host [2001:630:3c:f803::12], port 9999
[ 6] local 2001:630:3c:f803::6 port 55644 connected to 2001:630:3c:f803::12 port 9999
- - - - - - - - - - - - - - - - - - - - - - -
[ ID] Protocol Interval Bits/s
[ 6] TCP 000-001 sec 9.55G
[ 6] TCP 001-002 sec 9.89G
[ 6] TCP 002-003 sec 9.67G
[ 6] TCP 003-004 sec 9.31G
[ 6] TCP 004-005 sec 9.88G
[ 6] TCP 005-006 sec 9.84G
[ 6] TCP 006-007 sec 9.78G
[ 6] TCP 007-008 sec 9.87G
[ 6] TCP 008-009 sec 9.88G
[ 6] TCP 009-010 sec 9.72G
Ethr done, duration: 10s.
The ethr output looks more like that of iperf3, with per-second reporting by default.
Note that, as with iperf, our ethr server at Slough may be in use by others when you test, so you may not see the maximum available capacity at any given time.
perfSONAR
An iperf or Ethr test only tests performance at a specific moment in time, just as a ping or traceroute test only shows you the current round-trip time, loss or path.
In contrast, perfSONAR provides a means to measure a range of network characteristics over time, to record them, and to use that history of measurements to better diagnose or troubleshoot performance issues. By default, perfSONAR runs latency, loss and path tests continuously and throughput tests every 6 hours.
Top tip: We recommend sites run at least one perfSONAR server. The software documentation discusses positioning, but this would typically be at your network edge, or alongside the main filestore you run data transfers to/from.
The perfSONAR software is open source and runs on a variety of Linux platforms. A new major version, 5.0, was released in April 2023. Read documentation about perfSONAR.
The software is usually run on a dedicated physical server, but you can use a VM or container.
To minimise interference between tests you should use two separate network interfaces for the server, one for throughput tests, one for latency/loss tests.
There are multiple perfSONAR components that can be installed. The simplest is to install the tools and testpoint packages, such that your server can run tests and report them to another server that archives the results into OpenSearch. The ‘core’ package will additionally allow local archiving, while the full toolkit install will give all functionality. Read guidance on perfSONAR installation options.
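As a rough sketch (the bundle names come from the perfSONAR documentation, and you first need to add the perfSONAR package repository for your distribution), a testpoint install on an EL-based system would be:
$ sudo dnf install perfsonar-testpoint
Substituting perfsonar-core or perfsonar-toolkit gives the core or full toolkit installations described above.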
perfSONAR servers running the toolkit will need a certificate. We recommend you use a certificate that is signed by an authority trusted by browsers, such as certificates issued by the Jisc certificate service or Let’s Encrypt, rather than self-signed.
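For example, with Let's Encrypt and certbot, a minimal sketch (assuming the toolkit's default Apache web front end; the hostname is hypothetical) would be:
$ sudo certbot --apache -d ps-toolkit.example.ac.uk  # hypothetical hostname: use your server's FQDN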
Once installed, you can configure persistent throughput and latency/loss tests to our Jisc perfSONAR servers. You should run throughput tests to the throughput interface, and latency tests to the latency interface.
Slough DC, running at up to 10Gbit/s:
- Throughput interface - ps-slough-bw.perf.ja.net
- Latency interface - ps-slough-lat.perf.ja.net
London PoP, running at up to 100Gbit/s:
- Throughput interface - ps-london-bw.perf.ja.net
- Latency interface - ps-london-lat.perf.ja.net
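Regular tests are normally defined in a pSConfig template, but for a quick one-off check against these endpoints you can also run pscheduler directly from your perfSONAR host, for example:
$ pscheduler task throughput --dest ps-slough-bw.perf.ja.net
$ pscheduler task latency --dest ps-slough-lat.perf.ja.net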
perfSONAR uses a tool called pscheduler to ensure that throughput tests do not overlap or contend with other tests, so you should have the full capacity of the link available for your measurements.
You can also run third-party tests with pscheduler, between two remote perfSONAR servers, if they are configured (via the limits file) to be open, for example:
$ pscheduler task throughput --source HOST1 --dest HOST2
While tests can be configured to run persistently to/from any given perfSONAR server, it is also possible to create a ‘mesh’ of servers for a community where each tests against the others. One example is the UK Worldwide LHC Computing Grid (WLCG) community, GridPP, which has a number of meshes for IPv4 and IPv6 testing. Another is the WLCG 100G server mesh. Jisc can assist with mesh configurations if required. Other servers can be found via the perfSONAR lookup service.
Data Transfer Node (DTN) tests
Jisc hosts data transfer nodes (DTNs) for application-oriented testing. We can install and support any specific data transfer tools on request.
Currently our Slough DC has a Globus endpoint, running at dtn-slough-10g.perf.ja.net, which is connected at 10Gbit/s.
The basic Globus transfer tools can be run without a Globus licence/subscription.
A variety of files is available for testing Globus with globus-url-copy: 1M.dat, 2M.dat, 10M.dat, 50M.dat, 1G.dat, 10G.dat, 20G.dat, 100G.dat, 1000G.dat
You can copy to /dev/null or to the file system, for example, copying a 10GB file to /dev/null:
$ globus-url-copy -vb ftp://dtn-slough-10g.perf.ja.net:2811/space00/10G.dat /dev/null
There is also a directory with 100 x 1GB files for more sustained testing:
$ globus-url-copy -r -vb ftp://dtn-slough-10g.perf.ja.net:2811/space00/small/ file:///tmp/
We also have a 100G DTN at our London PoP. Please email netperf@jisc.ac.uk if you wish to test against this.
RIPE Atlas anchor
The RIPE Atlas project has grown to over 10,000 measurement devices worldwide, and now supports both physical (small form factor USB/Ethernet) and virtual clients. These can be used to run lightweight tests such as latency, loss, HTTP and DNS between clients and anchors.
Jisc runs a (physical) anchor at our Slough DC which has a web interface available at RIPE Atlas. At the time of writing there are some 6,000 tests being run against the anchor.
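As a sketch, if you have a RIPE Atlas account, measurement credits and an API key configured, the ripe-atlas command-line tools can create a one-off measurement from Atlas probes towards a target of your choice (the target below is just an example):
$ ripe-atlas measure ping --target iperf-slough-10g.perf.ja.net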
Tools in development
We are currently looking to deploy an HTTP-based ‘speedtest’ service, most likely using the open source librespeed package. There are other implementations available which we are also testing.
Such speedtests aren’t necessarily as accurate as the tools described above, but they are the type of test most likely to be run by users on the Janet network and in their home networks, because they are trivially easy to run from a browser without installing any local software. By providing a familiar tool, we can also include pointers at the test pages towards the more advanced tools, along with information describing the limitations of ‘speedtest’ servers.
We are also following the latest developments in the IP Performance Measurement working group at the Internet Engineering Task Force (IETF) on a new ‘responsiveness’ test. The idea of the test is to measure how many responses per second a server can reply with, which is an indication of the potential buffering delays or ‘buffer bloat’ in a network path.
Example new ‘responsiveness’ test
Our iperf2 server supports the first implementation of this (as of iperf 2.1.9), which you can test with the --bounceback option, for example:
$ iperf -c iperf-slough-10g.perf.ja.net -i 1 --bounceback
The test reports RPS (responses per second), the higher the better.
Further information
You may also find the following links useful:
- Jisc advice and guidance for large scale data transfers over Janet
- Jisc Research Network Engineering community
- End-to-end performance tuning guidance
Contact us
For any queries about the Jisc test facility systems (on their use or their configuration) please contact our network performance team at netperf@jisc.ac.uk or you can email help@jisc.ac.uk to raise a ticket with our service desk.
If there are any other network test servers or services you would like us to host then please get in touch with us.