The good news is that it doesn't have to suck, if you build it out properly.

When wireless networking based around the 802.11b standard first hit consumer markets in the late nineties, it looked pretty good on paper. With a promised "11 Mbps" against original wired Ethernet's 10 Mbps, a reasonable person might have thought 802.11b was actually faster than a 10 Mbps wired Ethernet connection. It was a while before I was exposed to wireless networking—smartphones weren't a thing yet, and laptops were still hideously expensive, underpowered, and overweight. I was already rocking Fast Ethernet (100 Mbps) wired networks in all my clients' offices and my own house, so the idea of cutting my speed by 90 percent really didn't appeal.

In the early 2000s, things started to change. Laptops got smaller, lighter, and cheaper—and they had Wi-Fi built in right from the factory. Small businesses started eyeballing the "11 Mbps" that 802.11b promised and deciding that 10 Mbps had been enough for them in their last building, so why not just go wireless in the new one? My first real exposure to Wi-Fi was in dealing with the aftermath of that decision, and it didn't make for a good first impression. Turns out that "11 Mbps" was the maximum physical layer bit rate, not a speed at which you could ever expect your actual data to flow from one machine to another. In practice, it wasn't a whole lot better than dial-up Internet—in speed or reliability. If you had your devices close enough to each other and to the access point, about the best you could reasonably expect was 1 Mbps—about 125 KB/sec. It only got worse from there: if you had ten PCs all trying to access a server, you could cut that 125 KB/sec down to 12.5 KB/sec for each one of them.
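
If you want to check that arithmetic, here's the back-of-the-envelope version, using the rough ten-percent-of-PHY rule of thumb from above:

```python
# Back-of-the-envelope 802.11b throughput, using the rough rule of thumb
# that real-world best case is ~10 percent of the advertised PHY rate.

PHY_RATE_MBPS = 11          # the "11 Mbps" on the box
REAL_WORLD_FRACTION = 0.1   # rough best case observed in practice

def per_client_kb_sec(clients: int) -> float:
    """KB/sec each client gets, with airtime split evenly among clients."""
    usable_mbps = PHY_RATE_MBPS * REAL_WORLD_FRACTION   # ~1 Mbps
    usable_kb_sec = usable_mbps * 1000 / 8              # Mbps -> KB/sec
    return usable_kb_sec / clients

print(per_client_kb_sec(1))    # ~137 KB/sec -- ballpark of the ~125 above
print(per_client_kb_sec(10))   # ~14 KB/sec for each of ten PCs
```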

Just as everybody got used to the idea that 802.11b sucked, 802.11g came along. Promising 54 screaming Mbps, 802.11g was still only half the speed of Fast Ethernet, but five times faster than original Ethernet! Right? Well, no. Just like 802.11b, the advertised speed was really the maximum physical layer data rate, not anything you could ever expect to see on a progress bar. And also like 802.11b, your best-case scenario tended to be about a tenth of that—5 Mbps or so—and you'd be splitting that 5 Mbps among all the computers on the network, not getting it for each one of them like you would with a switched network.

802.11n was introduced to the consumer public around 2010, promising 600 Mbps. Wow! Okay, so it's not as fast as the gigabit wired Ethernet that had just started getting affordable around the same time, but six times faster than wired Fast Ethernet, right? Once again, a reasonable real-life expectation was around a tenth of that. Maybe. On a good day. To a single device.

When 802.11ac came to market in late 2013, the boxes in stores hysterically proclaimed faster and faster speeds, many of them several times higher than the fastest consumer wired networking available. As the years went by, it was 1.3 Gbps! 2.7 Gbps! 5.3 Gbps! But by then, I'd long since stopped paying attention. The marketers had gotten the bit between their teeth on day one and never let go. Wi-Fi is nowhere near as fast as wired; the marketing is all lies, lesson learned.

Having given up being excited about Wi-Fi a long time ago, I found it deeply weird when Wi-Fi mesh exploded on the market in 2016, and I wound up reviewing it in-depth.

Unpacking the marketing copy

Let's say a vendor offers you an "AC5300" router with "breakthrough tri-band Wi-Fi technology with amazing combined wireless speeds of up to 5,332 Mbps. Thanks to 4x4 data streams, that can be combined through beamforming and MU-MIMO technology to increase reliability and range." (That's actual ad copy for D-Link's DIR-895L/R. It's not just D-Link, though—Netgear, Linksys, ASUS, and TP-Link all do the same thing.) By now, we hopefully know that absolutely does not mean we're going to connect a laptop and download things at 600+ MB/sec. But what does it mean?

Things get murky when we try to unpack that "AC5300" speed rating. These ratings are generated by taking the maximum PHY rate of each radio in the router, multiplying it by the maximum number of MIMO streams that radio supports, and adding the results together. The DIR-895L/R is a tri-band device and can transmit and receive on three different Wi-Fi channels at once: two 5 GHz channels and one 2.4 GHz channel. Assuming you don't have any congestion from your neighbors' networks, that means you can connect three devices—say, a laptop, a smartphone, and a tablet—all at once, to different radios and on different channels. So far, so good!

We have two 5 GHz radios with 80 MHz wide channels and a 2.4 GHz radio with a 40 MHz wide channel, each of which supports up to four MIMO streams. Unfortunately, this doesn't add up right—a PHY rate of 433 Mbps per spatial stream on an 80 MHz wide 5 GHz channel, multiplied by four streams, comes out to 1,732 Mbps, and D-Link is claiming 2,166 Mbps per 5 GHz radio. Where's that extra 108.5 Mbps per stream coming from? You won't find a straightforward answer to that question, but depending on your level of cynicism, it's either "proprietary extensions to 802.11 that your device may or may not support enabling compression that your data may or may not be suitable for," or "marketing lol." This is a pretty standard practice now, and it's the reason why some 3x3 dual-band routers are suddenly jumping from "AC1750" to "AC1900."

It gets even worse when you examine the 2.4 GHz portion of that "AC5300" rating. D-Link is claiming 1,000 Mbps for the 2.4 GHz radio. The PHY rate for 40 MHz wide 802.11n 2.4 GHz channels is 150 Mbps, though, and 150 Mbps multiplied by four MIMO streams is 600 Mbps. Where's that missing 400 Mbps coming from? Honestly, it's anyone's guess—but it looks like they're giving themselves an extra 50 Mbps per stream by assuming 256-QAM modulation on 2.4 GHz spectrum, even though that's a non-standard, non-IEEE-approved setup that very few devices will support. That gets you to 800 Mbps. You're still 200 Mbps short of the 1,000 Mbps claimed, but that's the same 20 percent that D-Link granted themselves for "compression" on the 5 GHz spectrum—so Bob's your uncle. Probably.
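
Here's how little arithmetic it takes to conjure up "AC5300," reconstructed from the numbers above. The 1.25× "marketing multiplier" is my own label for the unexplained ~25 percent markup—D-Link doesn't document it anywhere:

```python
# Rebuilding the "AC5300" rating from per-radio PHY math.
# The 1.25x "marketing multiplier" is my own label for the unexplained
# markup discussed above; it's not documented by any vendor.

MARKETING_MULTIPLIER = 1.25

def radio_rating(per_stream_phy_mbps: float, streams: int) -> int:
    return round(per_stream_phy_mbps * streams * MARKETING_MULTIPLIER)

# Two 5 GHz radios: 433.3 Mbps/stream (80 MHz 802.11ac), 4 streams each.
five_ghz = radio_rating(433.3, 4)   # ~2,166 Mbps apiece
# One 2.4 GHz radio: 200 Mbps/stream assumes non-standard 256-QAM;
# the IEEE-sanctioned 40 MHz 802.11n rate is only 150 Mbps/stream.
two_ghz = radio_rating(200, 4)      # 1,000 Mbps

print(five_ghz * 2 + two_ghz)       # 5,332 -> hence "AC5300"
```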

If what you're getting out of this so far is "the AC speed rating is always a lie," you're not wrong. So let's get back to what we can actually, hopefully, kinda, expect out of all this.

First of all, let's talk about that "4×4 MIMO." It's great that the router has it, but your client devices—laptops, tablets, and smartphones—don't. As of February 2017, almost all client devices are either single-stream or 2×2. Those extra streams aren't doing you any good if your client devices can't use them. You might think that's okay; you can use two MIMO streams for your laptop and two for your tablet. Sorry, but still no—that's MU-MIMO, which your router may or may not support but your client devices almost certainly don't. (A very few flagship smartphones like the Galaxy S7 support MU-MIMO, but the only MU-MIMO laptop cards I've been able to obtain so far have been off-market bespoke interfaces provided directly by hardware vendors.)

MU-MIMO is also pretty theoretical; the small amount of MU-MIMO testing I've managed does show some promise—but it looks better in terms of fairly distributing bandwidth among the MU-MIMO clients than it does for raw performance increases. When I tested enabling MU-MIMO on a router with two MU-MIMO clients connected, it only bumped their total throughput up by about 20 percent. What your client devices all, or almost all, support is SU-MIMO, which still only allows a single device to communicate with the access point at any given time. So if your fastest client device has a 2×2 radio, all you're ever getting out of that access point is 2×2 speeds, period.

So far, we've chopped that "AC5300 up to 5.3 Gbps" router down to one radio at a time, which they're claiming 2.166 Gbps for. Then we laughed off the "extra speed for compression" that won't help our JPEGs, MP3s, gzip-compressed HTTP transmissions, or basically anything else we're likely to care about, which brought us down to 1.732 Gbps. Then we realized we can only connect on two of those four MIMO streams in the ad copy, which brought us down to 866 Mbps.

Are we done yet? Sadly, no. You are never going to see a device actually moving data at PHY rate outside of a carefully designed stream of UDP traffic in an RF-isolated, anechoic clean room.

Under ideal real-world conditions (10 feet or so of distance, no intervening walls, no interference or competition), a single high-quality client device will generally get somewhere between one-third and two-thirds of the PHY rate for the channel it's connected to, multiplied by the number of MIMO streams it can transmit and receive on. The Qualcomm Atheros AR9462 802.11n 2x2 adapter in my Acer C720 Chromebook (and in a small army of cheap laptops I use for testing) maxes out at somewhere around 205 Mbps, roughly two-thirds of the PHY rate of two 5 GHz, 64-QAM, 40 MHz wide MIMO streams. The TP-Link Archer T4U and Linksys WUSB-6300 802.11ac USB3 adapters I use for testing—also 2×2 devices—can almost hit 350 Mbps, which is about 40 percent of PHY. MacBook Pros with the Broadcom BCM94360CS, paired with the right router, can hit real-world ideal-condition speeds of 600-ish Mbps... but they're 3×3 adapters, putting them right back in that same "one-third to two-thirds" bracket.
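
As a rough sanity check, you can turn that bracket into an estimator. The one-third-to-two-thirds range is an empirical rule of thumb from my testing, not anything in the spec:

```python
# Rough real-world throughput estimate for a Wi-Fi client under ideal
# conditions, using the empirical "one-third to two-thirds of PHY" bracket.

def throughput_range_mbps(per_stream_phy: float, client_streams: int):
    """The client's stream count, not the router's, is what caps the link."""
    phy = per_stream_phy * client_streams
    return (phy / 3, phy * 2 / 3)

# 2x2 802.11n client, 40 MHz channel (e.g. the AR9462): PHY = 300 Mbps
print(throughput_range_mbps(150, 2))    # (100.0, 200.0); measures ~205
# 2x2 802.11ac client, 80 MHz channel (e.g. the T4U): PHY = 866 Mbps
print(throughput_range_mbps(433.3, 2))  # (~289, ~578); measures ~350
# 3x3 802.11ac client (e.g. the BCM94360CS): PHY = 1,300 Mbps
print(throughput_range_mbps(433.3, 3))  # (~433, ~867); measures ~600
```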

Now let's realize that most of the time, we're probably not going to be sitting 10 feet away from the router with a completely clear line of sight—half the reason we're wireless in the first place is so that we can wander around the whole house. You're thirty feet away with two or more walls between you and the router before you realize it, and now you're looking at more like 80 Mbps... and that's assuming you've got a great wireless client device, a really good access point, and you don't have any other people or devices competing with you for that radio's attention.

If you aren't disgusted enough with the whole thing yet... a lot of these devices exhibit a strong directional bias, too. The Linksys WUSB-6300 gets about the same speeds up or down, but the Qualcomm AR9462 and the Archer T4U both strongly prefer download to upload, with upload speeds frequently half as good as download, or even worse... and different client device designs, even with the same chipset under the hood, can perform very differently (the WUSB-6300 and the T4U are both Realtek RTL8812au devices).

Testing Wi-Fi is a mess.

A tale of two problems: signal and interference

With wired Ethernet, the maximum cable length is 100 meters—long enough to stretch across an American football field, and then some—and performance at 100 meters is the same as it is at 10 meters. With Wi-Fi, your range is... well, it is what it is; it depends partly on RF signal strength, partly on distance, partly on obstacles, and partly on RF multipath problems, all of which directly and visibly impact the speed and quality of the connection. All of this makes it easy to focus on RF signal strength as the solution to all of your problems. We've all been trained to look for "more bars" in our wireless connections, whether Wi-Fi or cellular. This makes the solution to any problem look deceptively simple: higher-powered transmitter! More bars! And if you live in a large house with a decent-sized yard, it might really be that simple for you, or at least close to it: more bars, better speed, happy customer.

Unfortunately, signal strength doesn't tell the whole story—we also have to worry about interference, and that's where things get hairy.

If you have a significant amount of interference on an Ethernet cable, you consider that a problem and you fix it. If you have a significant amount of RF interference on a Wi-Fi network, you consider that a day that ends in "Y" and you live with it. At the simplest level, RF interference on the same frequency as a Wi-Fi signal degrades it in the same way any noise interferes with a human conversation. On this level, it's easy to think more signal strength is still the answer—after all, if the music's loud and the air conditioner is running, what do you do? You speak up!

So far, this is easy to understand on an instinctive level: if you can't be heard, you speak up, problem solved. It's a popular approach, and I've reviewed and used a lot of products that use it: Netgear Nighthawk and Orbi, Archer C7, and Google Wifi all produce enough RF to annoy a neighbor three houses away. The problem is, that's not how Wi-Fi actually works.

Drowning out the neighbors' Wi-Fi

Let's be honest here: an awful lot of us, me included, are pretty much fine with the idea of drowning out the neighbors' Wi-Fi with a higher-powered router of our own. We're right back to that instinctual model of a conversation: the signal from the neighbors' Wi-Fi is pretty weak; if ours is strong enough, we can drown it out, and if that makes a problem for them they can either suck it up or go get a higher-powered router of their own, right?

This is a very human approach, but it's not a very effective one. Consider a conversation at a crowded bar: you're really intent on what your friend or date is saying, but the two of you are competing with the conversations on either side of you and behind you, as well as the music playing. So, naturally, you speak up! Unfortunately, what happens then is the people all around you get louder, too, resulting in a zero-sum game which ends up with everybody yelling and nobody able to understand anything very well.

Wireless networking doesn't work that way. It's engineered, not instinctual, and the standard directly prevents devices from "shouting over" one another. Instead of a crowded-bar-style competition for bandwidth, each device must wait for a chance to "speak," clearly and without competition from other devices. In technical terms, a Wi-Fi network is a collision domain, and this enforced politeness—the "collision avoidance" in CSMA/CA—helps avoid packet collisions. It's well worth doing, because if packets do collide, each device has to stop transmitting, wait a random interval of time, then try again—hopefully letting one machine start "talking" enough before the other that they don't drown one another out again. (If they picked the same random number, they'll collide again and have to start the whole process over.)
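
A toy simulation makes that backoff dance concrete. This is a drastic simplification of 802.11's actual CSMA/CA—real devices count down in slot times and cap their contention windows—and `contend` is just an illustrative name, but it captures the re-roll-until-someone-wins behavior:

```python
# Toy model of Wi-Fi collision avoidance: after a collision, each device
# picks a random backoff, and only a device with a uniquely lowest number
# gets to transmit. Real 802.11 uses slotted countdowns and caps the
# contention window; this keeps only the core re-roll-until-unique idea.
import random

def contend(devices: list[str], window: int = 16) -> str:
    attempts = 0
    while True:
        attempts += 1
        backoffs = {dev: random.randrange(window) for dev in devices}
        lowest = min(backoffs.values())
        winners = [d for d, b in backoffs.items() if b == lowest]
        if len(winners) == 1:   # someone's timer ran out first
            print(f"{winners[0]} transmits after {attempts} round(s)")
            return winners[0]
        # Two or more devices picked the same slot: they'd collide, so
        # everyone backs off and re-rolls (802.11 also widens the window).
        window *= 2

contend(["laptop", "phone", "thermostat"])
```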

Most technical people understand this about their own networks, but many don't realize that it's not just your Wi-Fi devices that are all on a single collision domain—it's all Wi-Fi devices on the same channel. Any, repeat, any transmission on the same channel ties up that channel, even if it's on a different network with a different SSID and a different WPA key. The 802.11 wireless specification uses Clear Channel Assessment (CCA) to determine whether the channel is "busy" or not, and if CCA says "occupied," the wireless device has to wait its turn. If your laptop, phone, or tablet can "hear" the preamble of another 802.11 transmission at -82 dBm, whether it's on your network or not, it has to sit tight, shut up, and wait its turn to speak.

Even if your device can't understand a preamble, any RF signal at -62 dBm or stronger is enough to make the channel "busy" for 802.11a/b/g/n networks—or at -72 dBm for 802.11ac devices. This is not a lot of signal strength—I can frequently "see" ten or more SSIDs at -82 dBm or better from my living room—and it gets worse from there. Even if a neighbor's router is on the other side of the house and only shows intermittently at -90 dBm, you're not in the clear—their teenager's laptop might be in the bedroom closest to yours, showing up at -58 dBm every time they send a video on Snapchat.
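
Expressed as code, the deferral rule looks something like this. The thresholds are the ones cited above, and `channel_busy` is my own illustrative function, not any real driver API:

```python
# Illustrative Clear Channel Assessment logic, using the thresholds above.
# Not a real driver API -- just the decision rule, spelled out.

PREAMBLE_DETECT_DBM = -82   # decodable 802.11 preamble, any network
ENERGY_DETECT_DBM = -62     # unintelligible RF, 802.11a/b/g/n
ENERGY_DETECT_AC_DBM = -72  # unintelligible RF, 802.11ac

def channel_busy(rssi_dbm: float, preamble_decodable: bool,
                 is_80211ac: bool) -> bool:
    """True means our radio must stay quiet and wait its turn."""
    if preamble_decodable and rssi_dbm >= PREAMBLE_DETECT_DBM:
        return True  # another Wi-Fi frame is in flight -- any SSID counts
    energy_floor = ENERGY_DETECT_AC_DBM if is_80211ac else ENERGY_DETECT_DBM
    return rssi_dbm >= energy_floor

# The neighbor kid's laptop at -58 dBm ties up our channel:
print(channel_busy(-58, preamble_decodable=True, is_80211ac=False))  # True
# A faint SSID at -90 dBm is weak enough to ignore:
print(channel_busy(-90, preamble_decodable=True, is_80211ac=False))  # False
```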

Aside from the increased speed, this is why 5 GHz networking is an improvement over 2.4 GHz networking—on the plus side, 2.4 GHz has more range and penetration, but on the minus side, 2.4 GHz has more range and penetration. In an apartment complex or densely packed subdivision with postage-stamp-sized yards, your devices will "see" 2.4 GHz networks—and have to cede airtime to them—for three or four times the distance they can "see" 5 GHz networks.

Competing with yourself

If you're big into the Internet of Things, and you've got everything from Hue light bulbs to Samsung refrigerators to smart door locks and thermostats, I hope you paid attention to that last section—it's the reason why sometimes your Wi-Fi sucks and your devices drop off the network even though you've got four bars everywhere in the house. If your smart TV is streaming 4K Netflix and your kid is watching YouTube and your spouse is playing DOTA, there may not be enough bandwidth left for the thermostat in the living room to get a packet in edgewise, and adding more RF signal strength isn't going to fix the problem.

As we keep adding more and more devices, and our neighbors keep adding more and more devices, the problem keeps getting worse. Higher-powered devices are a double-edged sword—the higher the TX strength and RX sensitivity, the wider the collision domain, and the more devices we're competing with for airtime. In a nutshell, the answer here is—and has to be—lower-powered networks that don't reach as far, with working roaming to hand you off from one access point to the next as you move around. This limits the number of devices in each collision domain and makes more spectrum available in each physical location, since different locations aren't competing with one another, allowing you to reuse spectrum over shorter distances.
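
To see why lower transmit power shrinks the collision domain, here's a crude sketch using the standard free-space path-loss formula and the -82 dBm preamble threshold from above. Real walls and floors attenuate far more than free space, so these distances are wildly generous, but the proportionality is the point: every 20 dB you shave off transmit power cuts the deferral radius by a factor of ten:

```python
# Why lower TX power shrinks a collision domain: free-space path loss.
# Real buildings attenuate far more, so these ranges are very generous;
# the 10x-per-20-dB relationship is what matters.
import math

CCA_DEFER_DBM = -82  # preamble-detect threshold from the CCA discussion

def deferral_radius_m(tx_power_dbm: float, freq_mhz: float = 2437) -> float:
    """Distance at which our signal still forces other radios to wait.
    FSPL(dB) = 20*log10(d_m) + 20*log10(f_MHz) - 27.55, solved for d."""
    max_loss_db = tx_power_dbm - CCA_DEFER_DBM
    return 10 ** ((max_loss_db - 20 * math.log10(freq_mhz) + 27.55) / 20)

print(round(deferral_radius_m(20)))  # ~1,233 m at a hot 20 dBm
print(round(deferral_radius_m(0)))   # ~123 m -- each -20 dB cuts range 10x
```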

So, mesh...?

Wi-Fi mesh is generally marketed along well-understood lines of signal strength. Get more bars everywhere! But the real promise of mesh isn't something as crude as just extending a signal farther. For the most part, that's pretty easy—bolt on a higher-powered transmitter and a more sensitive receiver and go to town. You don't even necessarily need mesh for that at all—something like an Archer C7 will cover serious distances for less than a hundred bucks. Where mesh gets really interesting is in the possibility of using multiple access points to divide your network up into smaller collision domains with fewer devices to compete with. The closer your clients are to an access point, the lower the latency, the lower the power needed, and—crucially—the fewer devices they have to compete with, if they're all smart enough to use the lowest transmit strength necessary.

We're still in the infancy of this stage of development. Although network engineers who deploy Wi-Fi access points in conference centers and airports generally understand the concepts of deliberately low transmit strength and careful use of spectrum to limit collisions, most home and small business gear still gets designed and marketed more simply—hot signal, big numbers, more power. But things are slowly starting to evolve.

Plume is the obvious poster child for this kind of next-generation strategy, with its emphasis on splitting your network into smaller collision domains to address congestion issues, rather than using high-powered devices to maximize ideal single-client performance. But the rest of the industry is getting there too, slowly but surely. Eero still uses the same channels for all devices, but it has gotten much more aggressive about splitting clients between bands, rather than trying to cram anything and everything in range onto the theoretically "faster" 5 GHz band. The new Linksys Velop spreads out its 2.4 GHz coverage—using a different channel at each access point—but uses the same pair of 5 GHz channels on each (which is a shame, since it could use a shared 5 GHz backhaul channel but offer a different client-facing 5 GHz channel at each AP). And AmpliFi HD is also splitting the 2.4 GHz spectrum, using a shared 5 GHz channel for backhaul but offering different 2.4 GHz channels at each access point... and its newer firmware versions are (intelligently) much more likely to steer clients to those 2.4 GHz channels, which don't compete with each other or with the 5 GHz backhaul.

Conclusion

RF signal strength isn't everything. Neither is a simple speed test. The more devices you have to contend with—your own, your family's, and your neighbors'—the more complicated everything gets. The Internet of Things will make absolutely certain that number keeps getting bigger, too, as everything from refrigerators to washing machines to light bulbs clamors for Internet access.

If you're a technical person and you're interested in embiggening your Wi-Fi, you should probably be looking less at how large the AC speed rating is and more at how many radios you're using, how many different bands you can make service available on, and how efficiently your access points can get data back to your router. The simplest answer is always going to be "wire up as many things as you can"—the fewer devices connected to Wi-Fi, the less competition for Wi-Fi, the better the Wi-Fi performs. The same thing goes for your access points themselves—if they have wired backhaul to the router (Eero, Plume, Velop, and Ubiquiti UAPs can all be connected wired) then they can cover more spectrum without interfering with one another.