Factors limiting actual performance, criteria for real decisions
Most of the listed rates are theoretical maximum throughputs; in practice, the effective throughput is almost inevitably lower because of load from other devices (network or bus contention), physical and signal propagation distances, and overhead in data link layer protocols, among other factors. The maximum goodput (for example, the file transfer rate) may be lower still because of higher-layer protocol overhead and packet retransmissions caused by line noise, interference such as crosstalk, or packets dropped in congested intermediate network nodes. All protocols lose some capacity to such overhead, and protocols that handle a wide range of failure conditions robustly tend to sacrifice more of the maximum throughput in exchange for higher sustained long-term rates.
Device interfaces where one bus transfers data via another are limited, at best, to the throughput of the slowest interface. For instance, a SATA revision 3.0 (6 Gbit/s) controller on a single PCI Express 2.0 (5 Gbit/s) lane is limited to the 5 Gbit/s rate and must use more lanes to get around this problem. Early implementations of new protocols very often have this kind of bottleneck. The physical phenomena on which the device relies (such as the spinning platters in a hard drive) also impose limits; for instance, no spinning-platter drive shipping in 2009 saturated SATA revision 2.0 (3 Gbit/s), so moving a single such drive from that 3 Gbit/s interface to USB 3.0 at 4.8 Gbit/s yields no increase in realized transfer rate.
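As a rough illustration of this bottleneck effect, the Python sketch below treats a chain of interfaces as limited by its slowest link. The function name is invented for this sketch, and the figures are the nominal SATA and PCIe rates quoted above, before encoding or protocol overhead.

```python
# Illustrative sketch: end-to-end throughput of chained interfaces is bounded
# by the slowest link. Rates are nominal, before encoding or protocol overhead.
def bottleneck_rate(link_rates_gbps):
    return min(link_rates_gbps)

# SATA 3.0 device behind a single PCIe 2.0 lane
print(bottleneck_rate([6.0, 5.0]))       # 5.0 Gbit/s: the PCIe 2.0 lane is the limit
# With two PCIe 2.0 lanes, the host side no longer constrains the SATA link
print(bottleneck_rate([6.0, 2 * 5.0]))   # 6.0 Gbit/s: SATA 3.0 is now the limit
```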
Contention in a wireless or noisy spectrum, where the physical medium is entirely outside the control of those who specify the protocol, requires countermeasures that also consume throughput. Wireless devices, BPL, and modems may quote a line rate or gross bit rate that is higher than their net data rate because of error-correcting codes and other physical layer overhead. It is extremely common for throughput to be far less than half of the theoretical maximum, though more recent technologies (notably BPL) employ preemptive spectrum analysis to avoid this and so have much more potential to reach actual gigabit rates in practice than earlier modems.
Another factor reducing throughput is deliberate policy decisions made by Internet service providers for contractual, risk management, aggregation saturation, or marketing reasons. Examples are rate limiting, bandwidth throttling, and the assignment of IP addresses to groups. These practices tend to limit the throughput available to each user, but maximize the number of users that can be supported on one backbone.
Furthermore, chips that implement the fastest rates are often simply not available. AMD, for instance, did not support the 32-bit HyperTransport interface on any CPU it had shipped as of the end of 2009, and WiMAX service providers in the US typically supported only up to 4 Mbit/s at that time.
Choosing service providers or interfaces based on theoretical maxima is unwise, especially for commercial needs. A good example is large-scale data centers, which should be more concerned with the price per port of supporting the interface, power and heat considerations, and the total cost of the solution. Because some protocols such as SCSI and Ethernet now operate many orders of magnitude faster than when originally deployed, scalability of the interface is one major factor, as it prevents costly shifts to technologies that are not backward compatible. Underscoring this is the fact that such shifts often happen involuntarily or by surprise, especially when a vendor abandons support for a proprietary system.
Conventions
By convention, bus and network data rates are denoted either in bits per second – bit/s, kbit/s (10³ bit/s), Mbit/s (10⁶ bit/s), Gbit/s (10⁹ bit/s), Tbit/s (10¹² bit/s) – or bytes per second – B/s, kB/s (10³ B/s), MB/s (10⁶ B/s), GB/s (10⁹ B/s), TB/s (10¹² B/s). In general, parallel interfaces are quoted in B/s and serial in bit/s. The more commonly used unit is shown below in bold type.
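As a minimal illustration of these conventions, the sketch below assumes decimal (SI) prefixes and plain 8-bit bytes; the helper names are invented here, and real links also carry framing overhead that this ignores.

```python
# Illustrative sketch of the unit conventions: decimal (SI) prefixes, 8-bit bytes.
SI = {"k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}

def to_bit_per_s(value, prefix):
    return value * SI[prefix]

def to_byte_per_s(bit_rate):
    return bit_rate / 8   # plain 8-bit bytes, ignoring start/stop bits and line codes

# Example: a 480 Mbit/s serial rate corresponds to 60 MB/s
print(to_byte_per_s(to_bit_per_s(480, "M")) / SI["M"])   # -> 60.0
```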
On devices like modems, bytes may be more than 8 bits long because they may be individually padded out with additional start and stop bits; the figures below will reflect this. Where channels use line codes (such as Ethernet, Serial ATA, and PCI Express), quoted rates are for the decoded signal.
The figures below are simplex data rates, which may conflict with the duplex rates vendors sometimes use in promotional materials. Where two values are listed, the first value is the downstream rate and the second value is the upstream rate.
The figures below are grouped by network or bus type, then sorted within each group from lowest to highest bandwidth; gray shading indicates a lack of known implementations.
As stated above, all quoted bandwidths are for each direction. Therefore, for duplex interfaces (capable of simultaneous transmission both ways), the stated values are simplex (one way) speeds, rather than total upstream+downstream.
802.11 networks in infrastructure mode are half-duplex; all stations share the medium. In infrastructure or access point mode, all traffic has to pass through an Access Point (AP). Thus, two stations on the same access point that are communicating with each other must have each and every frame transmitted twice: from the sender to the access point, then from the access point to the receiver. This approximately halves the effective bandwidth.
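A minimal sketch of this halving effect, assuming an otherwise idle cell and ignoring contention, acknowledgements, and management traffic; the function and the 54 Mbit/s figure are illustrative only.

```python
# Illustrative sketch: station-to-station traffic relayed through an access
# point occupies the shared half-duplex medium twice per frame.
def station_to_station_rate(cell_rate_mbps, transmissions_per_frame=2):
    return cell_rate_mbps / transmissions_per_frame

print(station_to_station_rate(54))   # 54 Mbit/s cell -> at most ~27 Mbit/s between two stations
```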
802.11 networks in ad hoc mode are still half-duplex, but devices communicate directly rather than through an access point. In this mode all devices must be able to see each other, instead of only having to be able to see the access point.
x The LPC protocol carries high overhead. While the gross data rate equals 33.3 million 4-bit transfers per second (16.67 MB/s), the fastest transfer type, firmware read, achieves 15.63 MB/s. The next fastest bus cycle, a 32-bit ISA-style DMA write, yields only 6.67 MB/s. Other transfers may be as low as 2 MB/s.[42]
y Uses 128b/130b encoding, meaning that about 1.54% of each transfer is consumed by the encoding itself rather than carrying data between the hardware components at each end of the interface. For example, a single-link PCIe 3.0 interface has an 8 Gbit/s transfer rate, yet its usable bandwidth is only about 7.88 Gbit/s.
z Uses 8b/10b encoding, meaning that 20% of each transfer is consumed by the encoding rather than carrying data between the hardware components at each end of the interface. For example, a single-link PCIe 1.0 interface has a 2.5 Gbit/s transfer rate, yet its usable bandwidth is only 2 Gbit/s (250 MB/s).
w Uses PAM-4 encoding and a 256-byte FLIT block, of which 14 bytes are FEC and CRC, meaning that 5.47% of the total data rate is used for error detection and correction instead of carrying data. For example, a single-link PCIe 6.0 interface has a 64 Gbit/s total transfer rate, yet its usable bandwidth is only about 60.5 Gbit/s.
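The usable-bandwidth figures in notes y, z, and w all follow from the same calculation: the raw line rate scaled by the fraction of each transfer that carries payload. A small sketch reproducing them; the function name is invented for illustration.

```python
# Illustrative sketch: usable bandwidth = line rate x payload fraction of the encoding.
def usable_rate_gbps(line_rate_gbps, payload_units, total_units):
    return line_rate_gbps * payload_units / total_units

print(usable_rate_gbps(2.5, 8, 10))      # PCIe 1.0, 8b/10b                          -> 2.0
print(usable_rate_gbps(8.0, 128, 130))   # PCIe 3.0, 128b/130b                       -> ~7.88
print(usable_rate_gbps(64.0, 242, 256))  # PCIe 6.0, 256-byte FLIT, 14 bytes FEC/CRC -> ~60.5
```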
The table below shows values for PC memory module types.
These modules usually combine multiple chips on one circuit board.
SIMM modules connect to the computer via an 8-bit- or 32-bit-wide interface. RIMM modules used by RDRAM are 16-bit- or 32-bit-wide.[49]
DIMM modules connect to the computer via a 64-bit-wide interface.
Some other computer architectures use different modules with a different bus width.
In a single-channel configuration, only one module at a time can transfer information to the CPU.
In multi-channel configurations, multiple modules can transfer information to the CPU at the same time, in parallel.
FPM, EDO, SDR, and RDRAM memory was not commonly installed in a dual-channel configuration. DDR and DDR2 memory is usually installed in single- or dual-channel configuration. DDR3 memory is installed in single-, dual-, tri-, and quad-channel configurations.
Bit rates of multi-channel configurations are the product of the module bit-rate (given below) and the number of channels.
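As a minimal sketch of this rule; the DDR3-1600 (PC3-12800) per-module rate is used here only as a convenient example, not taken from the table.

```python
# Illustrative sketch: aggregate bit rate = per-module rate x number of channels.
def multi_channel_rate_mb_s(module_rate_mb_s, channels):
    return module_rate_mb_s * channels

ddr3_1600 = 12800   # MB/s per module (PC3-12800), chosen as an example
print(multi_channel_rate_mb_s(ddr3_1600, 2))   # dual channel -> 25600 MB/s
print(multi_channel_rate_mb_s(ddr3_1600, 4))   # quad channel -> 51200 MB/s
```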
a The clock rate at which the DRAM memory cells operate. Memory latency is largely determined by this rate. Note that until the introduction of DDR4 the internal clock rate saw relatively slow progress. DDR, DDR2, and DDR3 memory use a 2n, 4n, and 8n prefetch buffer respectively to provide higher throughput, while the internal memory speed remains similar to that of the previous generation.
b The memory speed or clock rate advertised by manufacturers and suppliers usually refers to this rate (with 1 GT/s = 1 GHz). Note that modern types of memory use a double data rate (DDR) bus with two transfers per clock.
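Notes a and b are related by the prefetch depth: the advertised transfer rate is roughly the internal cell clock multiplied by the prefetch, and a module's peak bandwidth then follows from the 64-bit (8-byte) DIMM interface. A small sketch under those assumptions, with DDR2-800 and DDR3-1600 as example figures.

```python
# Illustrative sketch: advertised transfer rate = internal cell clock x prefetch,
# and peak module bandwidth = transfer rate x 8 bytes (64-bit DIMM interface).
def module_peak_mb_s(internal_clock_mhz, prefetch, bus_bytes=8):
    transfers_per_s = internal_clock_mhz * 1e6 * prefetch   # the rate from note b
    return transfers_per_s * bus_bytes / 1e6                # MB/s, decimal prefixes

print(module_peak_mb_s(200, 4))   # DDR2-800:  200 MHz cells, 4n prefetch ->  6400 MB/s
print(module_peak_mb_s(200, 8))   # DDR3-1600: 200 MHz cells, 8n prefetch -> 12800 MB/s
```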
Graphics processing units' RAM
Graphics processing units also use RAM modules; however, graphics memory differs somewhat from standard computer memory, with lower power requirements and designs specialised for GPUs: for example, GDDR3 was fundamentally based on DDR2. Each graphics memory chip is connected directly to the GPU (point-to-point). The total GPU memory bus width varies with the number of memory chips and the number of lanes per chip. For example, GDDR5 specifies either 16 or 32 lanes per device (chip), while GDDR5X specifies 64 lanes per chip. Over the years, bus widths rose from 64 bits to 512 bits and beyond: e.g. HBM is 1024 bits wide.[50]
Because of this variability, graphics memory speeds are sometimes compared per pin. For direct comparison to the values for 64-bit modules shown above, video RAM is compared here in 64-lane lots, corresponding to two chips for those devices with 32-bit widths.
In 2012, high-end GPUs used 8 or even 12 chips with 32 lanes each, for a total memory bus width of 256 or 384 bits. Combined with a transfer rate per pin of 5 GT/s or more, such cards could reach 240 GB/s or more.
RAM frequencies used for a given chip technology vary greatly. Where single values are given below, they are examples from high-end cards.[51] Since many cards have more than one pair of chips, the total bandwidth is correspondingly higher. For example, high-end cards often have eight chips, each 32 bits wide, so the total bandwidth for such cards is four times the value given below.
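A minimal sketch of this arithmetic; the chip counts and the 5 GT/s per-pin rate are the example figures from the paragraphs above, and the function name is invented for illustration.

```python
# Illustrative sketch: GPU memory bandwidth = (chips x lanes per chip) x per-pin rate.
def gpu_memory_bandwidth_gb_s(chips, lanes_per_chip, per_pin_gt_s):
    bus_width_bits = chips * lanes_per_chip
    return bus_width_bits * per_pin_gt_s / 8   # GB/s

print(gpu_memory_bandwidth_gb_s(8, 32, 5))    # 256-bit bus at 5 GT/s -> 160 GB/s
print(gpu_memory_bandwidth_gb_s(12, 32, 5))   # 384-bit bus at 5 GT/s -> 240 GB/s
```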
Data rates given are from the video source (e.g., video card) to receiving device (e.g., monitor) only. Out of band and reverse signaling channels are not included.
^Morse can transport 37 plaintext symbols: 26 alphabetic, 10 numeric, and one inter-word gap. Distinguishing 37 different symbols requires 5.21 bits of information per symbol (2^5.21 ≈ 37). A skilled operator sending the benchmark word "PARIS" plus an inter-word gap (equal to 31.26 bits) at 40 wpm is therefore operating at the equivalent of 20.84 bit/s.
^WPM, or words per minute, is the number of times the benchmark word "PARIS" is transferred per minute. Strictly speaking the code is quinary, accounting for the inter-element, inter-letter, and inter-word gaps, and yields 50 binary elements (bits) per word. Counting characters, including the inter-word gap, gives six characters per word, or 240 characters per minute and four characters per second at 40 wpm.
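The arithmetic in these two notes can be reproduced directly; a small sketch using the same 40 wpm figure.

```python
# Illustrative sketch of the Morse figures above.
import math

bits_per_symbol = math.log2(37)         # 26 letters + 10 digits + inter-word gap -> ~5.21 bits
bits_per_word = 6 * bits_per_symbol     # "PARIS" (5 letters) + inter-word gap -> ~31.26 bits

wpm = 40
print(wpm * bits_per_word / 60)         # ~20.84 bit/s information rate
print(wpm * 6 / 60)                     # 4 characters per second
```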
^ abcdefghij All modem figures assume asynchronous serial operation with 1 start bit, 8 data bits, no parity, and 1 stop bit (2 stop bits for 110-baud modems); transfer rates are therefore calculated as 10 transmitted bits per 8-bit byte (11 bits for 110-baud modems). This is a simplification: although a serial port is nearly always used to connect a modem and carries equivalent data rates, the modem's own protocols, modulations, and error correction differ completely.
^ abc 56K modems: V.90 and V.92 have just 5% overhead for protocol signalling. The maximum capacity can be achieved only when the upstream (service provider) end of the connection is digital, i.e. a DS0 channel.
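A small sketch of the framing convention described in these notes; the function name is invented for illustration.

```python
# Illustrative sketch: asynchronous framing sends each 8-bit byte with a start
# bit and one or two stop bits, i.e. 10 (or 11) bit times per byte.
def async_byte_rate(bit_rate, data_bits=8, start_bits=1, stop_bits=1):
    return bit_rate / (data_bits + start_bits + stop_bits)

print(async_byte_rate(9600))               # 9600 bit/s -> 960 bytes/s (10 bits per byte)
print(async_byte_rate(110, stop_bits=2))   # 110 baud   ->  10 bytes/s (11 bits per byte)
```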
^Effective aggregate bandwidth for an ISDN installation is typically higher than the rates shown for a single channel because multiple channels are used. A basic rate interface (BRI) provides two "B" channels and one "D" channel. Each B channel provides 64 kbit/s of bandwidth and the D channel carries signaling (call setup) information. The two B channels can be bonded to provide a 128 kbit/s data rate. Primary rate interfaces (PRI) vary depending on whether the region uses E1 (Europe and most of the world) or T1 (North America) bearers: an E1 PRI carries 30 B channels and one D channel, while a T1 PRI carries 23 B channels and one D channel. The D channel bandwidth differs between the two interface types: 16 kbit/s on a BRI and 64 kbit/s on a PRI.
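The channel arithmetic above, as a minimal sketch; only bonded B channels are counted, since the D channel carries signaling rather than user data.

```python
# Illustrative sketch: aggregate ISDN data rate from bonded 64 kbit/s B channels.
B_CHANNEL_KBIT_S = 64

def isdn_data_rate_kbit_s(b_channels):
    return b_channels * B_CHANNEL_KBIT_S

print(isdn_data_rate_kbit_s(2))    # BRI, both B channels bonded -> 128 kbit/s
print(isdn_data_rate_kbit_s(30))   # E1 PRI, 30 B channels       -> 1920 kbit/s
print(isdn_data_rate_kbit_s(23))   # T1 PRI, 23 B channels       -> 1472 kbit/s
```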
^ADSL connections vary in throughput from 64 kbit/s to several Mbit/s depending on configuration; most are below 2 Mbit/s. Some ADSL and SDSL connections have a higher digital bandwidth than a T1, but their rate is not guaranteed and drops when the system becomes overloaded, whereas T1-type connections are usually guaranteed and have no contention ratios.
^Satellite internet may have a high bandwidth but also has a high latency due to the distance between the modem, satellite and hub. One-way satellite connections exist where all the downstream traffic is handled by satellite and the upstream traffic by land-based connections such as 56K modems and ISDN.
^FireWire natively supports TCP/IP and is often used as an alternative to Ethernet when connecting two nodes.[21]
^A data rate comparison between FireWire and Giganet shows that, because of its lower overhead, FireWire achieves nearly the same throughput as Giganet.[22]
^ abc PCIe 2.0 effectively doubles the bus standard's bandwidth, from 2.5 GT/s to 5 GT/s.
^ abcdef PCIe 3.0 increases the bandwidth from 5 GT/s to 8 GT/s and switches to 128b/130b encoding.
^SCSI-1, SCSI-2 and SCSI-3 are signaling protocols and do not explicitly refer to a specific rate. Narrow SCSI exists using SCSI-1 and SCSI-2. Higher rates use SCSI-2 or later.
^Minimum overhead is 38 byte L1/L2, 14 byte AoE per 1024 byte user data
^Minimum overhead is 38 byte L1/L2, 20 byte IP, 20 byte TCP per 1460 byte user data
^ abcdef Fibre Channel 1GFC, 2GFC, 4GFC use an 8b/10b encoding scheme. Fibre Channel 10GFC, which uses a 64B/66B encoding scheme, is not compatible with 1GFC, 2GFC and 4GFC, and is used only to interconnect switches.
^ ab Minimum overhead is 38 byte L1/L2, 14 byte AoE per 8192 byte user data
^ abc Minimum overhead is 38 byte L1/L2, 20 byte IP, 20 byte TCP per 8960 byte user data
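These overhead notes all reduce to the same goodput calculation: the user payload divided by the payload plus framing and protocol headers. A small sketch; the function name is invented for illustration.

```python
# Illustrative sketch: goodput fraction = payload / (payload + L1/L2 framing + headers).
def goodput_fraction(payload_bytes, l1_l2_bytes=38, header_bytes=0):
    return payload_bytes / (payload_bytes + l1_l2_bytes + header_bytes)

print(goodput_fraction(1024, header_bytes=14))   # AoE, 1024-byte payload  -> ~0.952
print(goodput_fraction(1460, header_bytes=40))   # TCP/IP, standard frames -> ~0.949
print(goodput_fraction(8960, header_bytes=40))   # TCP/IP, jumbo frames    -> ~0.991
```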
^TTY uses a Baudot code, not ASCII. This uses 5 bits per character instead of 8, plus one start and approx. 1.5 stop bits (7.5 total bits per character sent).
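A minimal sketch of this framing; the 45.45 baud figure is a common teleprinter speed, used here only as an example.

```python
# Illustrative sketch: Baudot sends 5 data bits plus 1 start bit and ~1.5 stop
# bits, i.e. 7.5 bit times per character.
def baudot_chars_per_second(bit_rate):
    return bit_rate / 7.5

print(baudot_chars_per_second(45.45))   # 45.45 baud -> ~6 characters per second (~60 wpm)
```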
^Dave Haynie, designer of the Zorro III bus, states in this posting that Zorro III is an asynchronous bus and therefore does not have a classical MHz rating. A maximum theoretical MHz value may be derived by examining the timing constraints detailed in the Zorro III technical specification (archived 2012-07-16 at the Wayback Machine), which should yield about 37.5 MHz. No existing implementation performs to this level.
^Dave Haynie, designer of the Zorro III bus, claims in this posting that Zorro III has a max burst rate of 150 MB/s.