At E3 this past year, Xbox chief Phil Spencer made the surprising admission that work on next-gen consoles had already begun, and even let slip the plural “devices” when talking about what the next generation of Xbox, codenamed Scarlett, might look like. According to a report from tech site Thurrott, that blurry picture is starting to come into focus, with Microsoft planning to make a traditional console as well as a more capable streaming box as a cheaper option.
First, Thurrott reports, Microsoft is indeed going to release a traditional console as they have always done: a single box with up-to-date hardware inside to play games locally. Exactly what hardware it will contain is unknown, as development appears to be at an early stage, but one would expect a generational leap over the Xbox One.
Where it gets interesting is that a second device is also rumored to be in the works. This box will be fairly hardware-light, used mostly for accepting controller inputs, displaying an image to the screen, and handling collision detection, according to Thurrott. That means it’s not just a streaming box as we know it, but one that offloads some computational work to the local hardware. The report points out that this makes the Xbox-lite a bit more expensive than you might expect, but still a lot cheaper than a traditional console.
The idea for Microsoft is to leverage their vast bank of Azure servers, comparable to the kind of cloud Amazon wields with Amazon Web Services, for use with their consoles. Microsoft tried to apply Azure to the Xbox One, but hesitant development owing to consumer confusion, the lack of a cohesive vision for its use, and poor internet infrastructure put it on the back burner. It wasn’t until E3 this past year that Spencer began talking up cloud services once again, shortly before segueing into discussion of Scarlett.
The streaming box idea helps Microsoft escape the loss-leader model of traditional hardware by offering a solution that still heavily emphasizes a subscriber-based game distribution model. The company has already signaled their intentions in this arena by pursuing Game Pass, which provides a Netflix-like platform for games. A cloud-based Game Pass would probably close the circle and even, as we have heard from a few sources, allow Microsoft to shop the service to other devices.
Thurrott reports that Microsoft has cracked the latency issue with cloud-based gaming and likens the lag more to a multiplayer match than to existing streaming solutions. They go on to say that the Xbox-lite is further along in development than the traditional console and that both are aiming for release in 2020.
The biggest problem with this technology is something Microsoft has no control over: infrastructure varies wildly, as do the whims of internet service providers. A cheaper Xbox doesn’t save you money in the long run if streaming your single-player Assassin’s Creed game has you hitting data caps. Who knows what internet access will look like in North America in two years’ time, but the last two years have not exactly filled me with confidence that it’s trending in consumers’ favor.
VMware Workspace ONE is an intelligence-driven digital workspace platform that simply and securely delivers and manages any app on any device by integrating access control, application management, and multi-platform endpoint management. It combines identity and mobility management to provide frictionless and secure access to all the apps and data that employees need to work, wherever, whenever, and from whatever device they choose. VMware Horizon 7 is the leading platform for Windows desktop and application virtualization, delivering virtual desktops, published Windows applications, shared desktop sessions from Windows Server instances using Microsoft Remote Desktop Services (RDS), and ThinApp-packaged applications. Horizon 7 supports both Windows- and Linux-based desktops.
For each of the individual product components, the guide provides explanations and instructions covering how to scale, ensure availability, provide disaster recovery, and deploy across multiple sites, including data replication, load balancing, and database failover cluster instance (FCI) setup.
Reference Architecture Design Methodology
To ensure a successful Workspace ONE deployment, it is important to follow proper design methodology.
Design begins by defining business requirements and drivers, which can be mapped to use cases that can be adapted to most scenarios. You can then align and map those use cases to a set of integrated services provided by Workspace ONE.
A Workspace ONE design uses a number of components to provide the services that address the identified use cases. Before you can assemble and integrate these components to form a service that you deliver to end users, you must first design and build the components or products, according to best practices, in a modular and scalable manner to allow for change, growth, and integration into the existing environment. Only then can you bring the parts together to deliver the integrated services to satisfy the use cases, business requirements, and the user experience.
Service definitions are the blueprints that help you understand how to address the identified use cases. The service, for a use case, defines the unique requirements and identifies the technology or feature combinations that satisfy those unique requirements. The detail required to build out the products and components comes later, after the services are defined and the required components are understood.
The sample services defined in the reference architecture are modular to allow you to adapt the services to your particular use cases. In some cases, that might mean adding additional components, while in others it might be possible to remove some that are not required.
You can also combine multiple services to address more complex use cases. For example, you could combine a Workspace ONE service with a Horizon 7 service and a recovery service.
This reference architecture has been tested and validated with regard to component design and build, service build, integration, and user workflow, to ensure that all objectives are met, that use cases are delivered properly, and that real-world implementation is achievable.
A network is a group of two or more computers that intelligently share hardware or software devices with each other. A network can be as small and simple as two computers that share a printer or as complex as the world’s largest network: the internet.
Intelligently sharing means that each computer that shares resources with another computer or computers maintains control of that resource. Thus, a USB switchbox for sharing a single printer between two or more computers doesn’t qualify as a network device; because the switchbox—not the computers—handles the print jobs, neither computer knows when the other one needs to print, and print jobs can potentially interfere with each other.
A shared printer, on the other hand, can be controlled remotely and can store print jobs from different computers on the print server’s hard disk. Users can change the sequence of print jobs, hold them, or cancel them. And, sharing of the device can be controlled through passwords, further differentiating it from a switchbox.
You can share or access many different types of devices over a network, but the most common devices include the following:
Entire drives or just selected folders can be shared with other users via the network.
In addition to reducing hardware costs by sharing expensive printers and other peripherals among multiple users, networks provide additional benefits to users:
A single Internet connection can be shared among multiple computers.
Electronic mail (email) can be sent and received.
Multiple users can share access to software and data files.
Files and folders can be backed up to local or remote shares.
Audio and video content can be streamed to multiple devices.
Multiple users can contribute to a single document using collaboration features.
Remote-control/access programs can be used to troubleshoot problems or show new users how to perform a task.
Types of Networks
Several types of networks exist, from small two-station arrangements, to networks that interconnect offices in many cities:
Local area networks—The smallest office network is referred to as a local area network (LAN). A LAN is formed from computers and components in a single office or building. Home networks built from the same components used in office networks are also common.
Wide area networks—LANs in different locations can be connected by high-speed fiber-optic, satellite, or leased phone lines to form a wide area network (WAN).
The Internet—The World Wide Web is the most visible part of the world’s largest network, the Internet. The Internet is really a network of networks, all of which are connected to each other through Transmission Control Protocol/Internet Protocol (TCP/IP). It’s a glorified WAN in many respects. Programs such as web browsers, File Transfer Protocol (FTP) clients, and email clients are some of the most common ways users work with the Internet.
Intranets—Intranets use the same web browsers and other software and the same TCP/IP protocol as the public Internet, but intranets exist as a portion of a company’s private network. Typically, intranets comprise one or more LANs that are connected to other company networks, but, unlike the Internet, the content is restricted to authorized company users only. Essentially, an intranet is a private Internet.
Extranets—Intranets that share a portion of their content with customers, suppliers, or other businesses, but not with the general public, are called extranets. As with intranets, the same web browsers and other software are used to access the content.
Note: Both intranets and extranets rely on firewalls and other security tools and procedures to keep their private contents private.
Requirements for a Network
Unless the computers that are connected know they are connected and agree on a common means of communication and what resources are to be shared, they can’t work together. Networking software is just as important as networking hardware because it establishes the logical connections that make the physical connections work.
At a minimum, each network requires the following:
Physical (cable) or wireless (usually via radio frequency [RF]) connections between computers.
A common set of communications rules, known as a network protocol.
Software that enables resources to be served to or shared with other network-enabled devices and that controls access to the shared resources. This can be in the form of a network operating system or NOS (such as older versions of Novell NetWare) that runs on top of an operating system; however, current operating systems (OSes), such as Windows, Mac OS X, and Linux, also provide network sharing services, thus eliminating the need for a specialized NOS. A machine sharing resources is usually called a server.
Resources that can be shared, such as printers, drives, modems, media players, and so on.
Software that enables computers to access other computers sharing resources (servers). Systems accessing shared resources are usually called network clients. Client software can be in the form of a program or service that runs on top of an OS. Current OSes, such as Windows, Mac OS X, and Linux include client software.
These rules apply both to the simplest and the most powerful networks, and all the ones in between, regardless of their nature. The details of the hardware and software you need are discussed more fully later in this chapter.
Client/Server Versus Peer Networks
Although every device on a LAN is connected to every other device, they do not necessarily communicate with each other. There are two basic types of LANs, based on the communication patterns between the machines: client/server networks and peer-to-peer networks.
On a client/server network, every computer has a distinct role: that of either a client or a server. A server is designed to share its resources among the client computers on the network. Typically, servers are located in secured areas, such as locked closets or data centers (server rooms), because they hold an organization’s most valuable data and do not have to be accessed by operators on a continuous basis. The rest of the computers on the network function as clients.
A dedicated server computer often has faster processors, more memory, and more storage space than a client because it might have to service dozens or even hundreds of users at the same time. High-performance servers typically use from two to eight processors (and that’s not counting multi-core CPUs), have many gigabytes of memory installed, and have one or more server-optimized network interface cards (NICs), RAID (Redundant Array of Independent Drives) storage consisting of multiple drives, and redundant power supplies. Servers often run a special network OS—such as Windows Server, Linux, or UNIX—that is designed solely to facilitate the sharing of its resources. These resources can reside on a single server or on a group of servers. When more than one server is used, each server can “specialize” in a particular task (file server, print server, fax server, email server, and so on) or provide redundancy (duplicate servers) in case of server failure. For demanding computing tasks, several servers can act as a single unit through the use of parallel processing.
A client computer typically communicates only with servers, not with other clients. A client system is a standard PC that is running an OS such as Windows. Current OSes contain client software that enables the client computers to access the resources that servers share. Older OSes, such as Windows 3.x and DOS, required add-on network client software to join a network.
By contrast, on a peer-to-peer network, every computer is equal and can communicate with any other computer on the network to which it has been granted access rights. Essentially, every computer on a peer-to-peer network can function as both a server and a client; any computer on a peer-to-peer network is considered a server if it shares a printer, a folder, a drive, or some other resource with the rest of the network. This is why you might hear about client and server activities, even when the discussion is about a peer-to-peer network.
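The client/server pattern described above can be sketched with a few lines of Python’s standard socket library. This is a minimal illustration, not any particular product’s implementation: the server shares a “resource” (here, a trivial echo service), and the client consumes it; the message and address are arbitrary choices for the example.

```python
import socket
import threading

def run_server(host="127.0.0.1", port=0):
    """A minimal server: shares a resource (an echo service) with clients."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    port = srv.getsockname()[1]  # actual port chosen by the OS

    def serve_one():
        conn, _addr = srv.accept()       # wait for a single client
        data = conn.recv(1024)           # receive the client's request
        conn.sendall(b"echo: " + data)   # serve the shared resource
        conn.close()
        srv.close()

    threading.Thread(target=serve_one, daemon=True).start()
    return port

def run_client(port, message):
    """A client: talks only to the server, never to other clients."""
    cli = socket.create_connection(("127.0.0.1", port))
    cli.sendall(message)
    reply = cli.recv(1024)
    cli.close()
    return reply

port = run_server()
print(run_client(port, b"print job"))  # → b'echo: print job'
```

In a peer-to-peer arrangement, every machine would run both halves of this code, acting as a server for its own shared resources and as a client of everyone else’s.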
Peer-to-peer networks can be as small as two computers or as large as hundreds of systems and devices. Although there is no theoretical limit to the size of a peer-to-peer network, performance, security, and access become a major headache on peer-based networks as the number of computers increases. In addition, Microsoft imposes a limit of 5, 10, or 20 concurrent client connections on computers running client versions of Windows. This means that a maximum of 20 (or fewer) systems can concurrently access shared files or printers on a given system. This limit is reported as the “Maximum Logged On Users” and can be seen by issuing the NET CONFIG SERVER command at a command prompt. The limit is normally unchangeable and is fixed by the specific version and edition of Windows as follows:
5 users: Windows XP Home, Vista Starter/Home Basic
10 users: Windows NT, 2000, XP Professional, Vista Home Premium/Business/Enterprise/Ultimate
20 users: Windows 7 (all editions)
When more than the allowed limit of users or systems try to connect, the connection is denied and the client sees one of the following error messages:
Operating system error 71. No more connections can be made to this remote computer at this time because there are already as many connections as the computer can accept.
System error 71 has occurred. This remote computer has reached its connection limit, you cannot connect at this time.
Even though it is called a “Server” OS, Windows Home Server has the same 10-connection limit as the non-Home client versions of Windows XP and Vista. If you need a server that can handle more than 10 or 20 clients, I recommend using a Linux-based server OS (such as Ubuntu Server) or one of the professional Windows server products (such as Windows 2000 Server, Server 2003, Server 2008, Essential Business Server, or Small Business Server).
Peer-to-peer networks are more common in small offices or within a single department of a larger organization. The advantage of a peer-to-peer network is that you don’t have to dedicate a computer to function as a file server; instead, every computer can share its resources with any other. The potential disadvantages are that typically less security and less control exist, because users normally administer their own systems, whereas client/server networks have the advantage of centralized administration.
Note that the actual networking hardware (interface cards, cables, and so on) is the same in client/server and peer-to-peer networks; it is only the logical organization, management, and control of the network that varies.
Wired Ethernet: Network Basics
With tens of millions of computers connected by Ethernet cards and cables, Ethernet is the most widely used data-link layer protocol in the world. You can buy Ethernet adapters from dozens of competing manufacturers, and most systems sold in the past decade incorporate one or more built-in Ethernet ports.
Older adapters supported one, two, or all three of the cable types defined in the standard: Thinnet, Thicknet, and unshielded twisted pair (UTP). Current adapters support only UTP. Traditional Ethernet operates at a speed of 10 Mb/s, but the more recent standards push this speed to 100 Mb/s (Fast Ethernet) or 1000 Mb/s (gigabit Ethernet). Most desktop and even laptop systems now incorporate gigabit Ethernet. In the future we will likely see 10 gigabit Ethernet (also known as 10G Ethernet) appearing in desktop PCs. 10G Ethernet runs at 10,000 Mb/s and is used primarily in enterprise data centers and servers.
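To put these rated speeds in perspective, the short calculation below estimates the time to move a file over each generation of Ethernet. It assumes an ideal, overhead-free link (and treats 1 MB as exactly 8,000,000 bits for simplicity); real-world throughput is always lower due to protocol overhead and contention.

```python
def transfer_seconds(file_mb, link_mbps):
    """Ideal time to move a file, ignoring protocol overhead and contention."""
    bits = file_mb * 8_000_000           # 1 MB treated as 8,000,000 bits
    return bits / (link_mbps * 1_000_000)

# Time to move a 700 MB file over each generation of Ethernet
for name, speed in [("Ethernet", 10), ("Fast Ethernet", 100),
                    ("Gigabit Ethernet", 1000), ("10G Ethernet", 10000)]:
    print(f"{name}: {transfer_seconds(700, speed):.1f} s")
# → Ethernet: 560.0 s, Fast Ethernet: 56.0 s,
#   Gigabit Ethernet: 5.6 s, 10G Ethernet: 0.6 s
```

Each tenfold jump in link speed cuts the ideal transfer time by the same factor, which is why gigabit hardware is worth specifying for any new installation.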
Note: Throughout the remainder of this chapter, be aware that discussion of older Ethernet solutions (such as those using Thicknet or Thinnet) as well as alternative networks (such as Token-Ring) are only included for reference. You will usually encounter those technologies only when working on older, existing networks. New network installations today normally use Gigabit, Fast, or Wireless Ethernet.
Fast Ethernet requires adapters, hubs, switches, and UTP or fiber-optic cables designed to support its rated speed. Some early Fast Ethernet products supported only 100 Mb/s, but almost all current Fast Ethernet products are combination devices that run at both 10 Mb/s and 100 Mb/s, enabling backward compatibility with older 10 Mb/s Ethernet network hardware.
Note: Some specifications say that Fast Ethernet supports 200 Mb/s. This is because it normally runs in full-duplex mode (sends/receives data simultaneously), which gives it an effective speed of 200 Mb/s with both directions combined. Still, the throughput in any one direction remains the same 100 Mb/s. Full-duplex operation requires that all hardware in the connection, including adapters and switches, be capable of running in full-duplex and be configured to run in full-duplex (or automatically detect full-duplex signals).
Both the most popular form of Fast Ethernet (100BASE-TX) and 10BASE-T standard Ethernet use two of the four wire pairs found in UTP Category 5 cable. (These wire pairs are also found in Cat 5e, Cat 6, and Cat 6a cable.) An alternative Fast Ethernet standard called 100BASE-T4 uses all four wire pairs in UTP Category 5 cable, but this Fast Ethernet standard was never popular and is seldom seen today.
Gigabit Ethernet also requires special adapters, hubs, switches, and cables. When gigabit Ethernet was introduced, most installations used fiber-optic cables, but today it is far more common to run gigabit Ethernet over the same Category 5 UTP cabling (although better Cat 5e/6/6a is recommended) that Fast Ethernet uses. Gigabit Ethernet for UTP is also referred to as 1000BASE-T.
Unlike Fast Ethernet and standard Ethernet over UTP, Gigabit Ethernet uses all four wire pairs. Thus, gigabit Ethernet requires dedicated Ethernet cabling; you can’t “borrow” two wire pairs for telephone or other data signaling with gigabit Ethernet as you can with the slower versions. Most gigabit Ethernet adapters can also handle 10BASE-T and 100BASE-TX Fast Ethernet traffic, enabling you to interconnect all three UTP-based forms of Ethernet on a single network.
Gigabit Ethernet hardware was initially very expensive, thus limiting the use of gigabit Ethernet to high-end network interconnections. More recently, the prices of cables, adapters, and especially switches have fallen dramatically, making gigabit the recommended choice for all new cable, adapter, and switch installations.
Neither Fast Ethernet nor gigabit Ethernet supports the thin or thick coaxial cable originally used with traditional Ethernet, although you can interconnect coaxial cable–based and UTP-based Ethernet networks by using media converters or specially designed hubs and switches.
10 Gigabit Ethernet
10 gigabit Ethernet is a high-speed networking standard that incorporates many different types of physical interconnections including several that are fiber optic and copper based. Of all of the possible connection types, the only one relevant to PCs is called 10GBASE-T, which uses standard twisted-pair cables and 8P8C (RJ45) connectors just like Fast and gigabit Ethernet.
10 gigabit Ethernet (10GBASE-T) requires Category 6a (or better) cabling for support of connection distances up to 100 meters (328 feet). Lower grade Cat 6 cable can be used if the distance is limited to 55 meters (180 feet). Just as with gigabit Ethernet, all four pairs in the cable are used.
10 gigabit Ethernet hardware is currently very expensive and limited to high-end network interconnections, typically between servers or as a backbone connection between multiple gigabit Ethernet networks. Once the prices of adapters and switches fall close to those for gigabit Ethernet, we will see 10 gigabit Ethernet start to become popular for PC-based networks. To prepare for a future upgrade to 10 gigabit Ethernet, consider installing only Category 6a or better cabling in any new installations.
Wireless Ethernet: 802.11a To 802.11g: Network Basics
The most common forms of wireless networking are built around various versions of the IEEE 802.11 wireless Ethernet standards, including IEEE 802.11b, IEEE 802.11a, IEEE 802.11g, and IEEE 802.11n.
Wireless Fidelity (Wi-Fi) is a logo and term given to any IEEE 802.11 wireless network product certified to conform to specific interoperability standards. Wi-Fi certification comes from the Wi-Fi Alliance, a nonprofit international trade organization that tests 802.11-based wireless equipment to ensure it meets the Wi-Fi standard. To carry the Wi-Fi logo, an 802.11 networking product must pass specific compatibility and performance tests, which ensure that the product will work with all other manufacturers’ Wi-Fi equipment on the market. This certification arose from the fact that certain ambiguities in the 802.11 standards allowed for potential problems with interoperability between devices. By purchasing only devices bearing the Wi-Fi logo, you ensure that they will work together and not fall into loopholes in the standards.
Note: The Bluetooth standard for short-range wireless networking, covered later in this chapter, is designed to complement, rather than rival, IEEE 802.11–based wireless networks.
The widespread popularity of IEEE 802.11–based wireless networks has led to the abandonment of other types of wireless networking such as the now-defunct HomeRF.
Note: Although products that are certified and bear the Wi-Fi logo for a particular standard are designed and tested to work together, many vendors of wireless networking equipment created devices that also featured proprietary “speed booster” technologies to raise the speed of the wireless network even further. This was especially common in early 802.11g devices, while newer devices conform more strictly to the official standards. Although these proprietary solutions can work, beware that most, if not all, of these vendor-specific solutions are not interoperable with devices from other vendors. When different vendor-specific devices are mixed on a single network, they use the slower common standard to communicate with each other.
When the first 802.11b wireless networking products appeared, compatibility problems existed due to certain aspects of the 802.11 standards being ambiguous or leaving loopholes. A group of companies formed an alliance designed to ensure that their products would work together, thus eliminating any ambiguities or loopholes in the standards. This was originally known as the Wireless Ethernet Compatibility Alliance (WECA) but is now known simply as the Wi-Fi Alliance (www.wi-fi.org). In the past, the term Wi-Fi has been used as a synonym for IEEE 802.11b hardware. However, because the Wi-Fi Alliance now certifies other types of 802.11 wireless networks, the term Wi-Fi should always be accompanied by both the standards supported (that is, 802.11a/b/g/n) and the supported frequency bands (that is, 2.4 GHz and/or 5 GHz) to make it clear which products will work with the device. Currently, the Alliance has certified products that meet the final versions of the 802.11a, 802.11b, 802.11g, and 802.11n standards in the 2.4 GHz and 5 GHz bands.
The Wi-Fi Alliance currently uses a color-coded certification label to indicate the standard(s) supported by a particular device. The image below shows the most common versions of the label, along with the official IEEE standard(s) that the label corresponds to: 802.11a (orange background); 802.11b (dark blue background); 802.11g (lime green background); 802.11n (violet background).
IEEE 802.11b (Wi-Fi, 2.4 GHz band–compliant, also known as Wireless-B) wireless networks run at a maximum speed of 11 Mb/s, about the same as 10BASE-T Ethernet (the original version of IEEE 802.11 supported data rates up to 2 Mb/s only). 802.11b networks can connect to conventional Ethernet networks or be used as independent networks, similar to other wireless networks. Wireless networks running 802.11b hardware use the same 2.4 GHz spectrum that many portable phones, wireless speakers, security devices, microwave ovens, and the Bluetooth short-range networking products use. Although the increasing use of these products is a potential source of interference, the short range of wireless networks (indoor ranges up to approximately 150 feet and outdoor ranges up to about 300 feet, varying by product) minimizes the practical risks. Many devices use a spread-spectrum method of connecting with other products to minimize potential interference.
Although 802.11b supports a maximum speed of 11 Mb/s, that top speed is seldom reached in practice, and speed varies by distance. Most 802.11b hardware is designed to run at four speeds, using one of four data-encoding methods, depending on the speed range:
11 Mb/s—Complementary code keying (CCK)
5.5 Mb/s—Complementary code keying (CCK)
2 Mb/s—Differential quadrature phase-shift keying (DQPSK)
1 Mb/s—Differential binary phase-shift keying (DBPSK)
As distances change and signal strength increases or decreases, 802.11b hardware switches to the most suitable data-encoding method. The overhead required to track and change signaling methods, along with the additional overhead required when security features are enabled, helps explain why wireless hardware throughput is consistently lower than the rated speed. The figure below is a simplified diagram showing how speed is reduced with distance. Figures given are for best-case situations; building design and antenna positioning can also reduce speed and signal strength, even at relatively short distances.
The second flavor of Wi-Fi is the wireless network known officially as IEEE 802.11a. 802.11a (also referred to as Wireless-A) uses the 5 GHz frequency band, which allows for much higher speeds (up to 54 Mb/s) and helps avoid the many devices that interfere with lower-frequency 802.11b networks. Although real-world 802.11a hardware seldom, if ever, reaches that speed (almost five times that of 802.11b), 802.11a maintains relatively high speeds at both short and long distances.
For example, in a typical office floor layout, the real-world throughput (always slower than the rated speed due to security and signaling overhead) of a typical 802.11b device at 100 feet might drop to about 5 Mb/s, whereas a typical 802.11a device at the same distance could have a throughput of around 15 Mb/s. At a distance of about 50 feet, 802.11a real-world throughput can be four times faster than 802.11b. 802.11a has a shorter maximum distance than 802.11b (approximately 75 feet indoors), but you get your data much more quickly.
Given the difference in throughput (especially at long distances), and if we take the existence of 802.11g out of the equation for a moment, why not skip 802.11b altogether? In a single word: frequency. By using the 5 GHz frequency instead of the 2.4 GHz frequency used by 802.11b/g, standard 802.11a hardware cuts itself off from the already vast 802.11b/g universe, including the growing number of public and semipublic 802.11b/g wireless Internet connections (called hot spots) showing up in cafes, airports, hotels, and business campuses.
The current solution for maximum flexibility is to use dual-band hardware. Dual-band hardware can work with either 802.11a or 802.11b/g networks, enabling you to move from an 802.11b/g wireless network at home or at Starbucks to a faster 802.11a office network.
IEEE 802.11g, also known to some as Wireless-G, is a standard that offers compatibility with 802.11b along with higher speeds. The final 802.11g standard was ratified in mid-2003.
Although 802.11g is designed to connect seamlessly with existing 802.11b hardware, early 802.11g hardware was slower and less compatible than the specification promised. In some cases, problems with early-release 802.11g hardware could be solved through firmware or driver upgrades.
Note: Although 802.11b/g/n wireless hardware can use the same 2.4 GHz frequencies and can coexist on the same networks, when mixing different standards on the same network, the network will often slow down to the lowest common denominator speed. To prevent these slowdowns, you can configure access points to disable “mixed mode” operation, but this will limit the types of devices that can connect. For example, you can configure a 2.4 GHz Wireless-N access point to allow 802.11b/g/n connections (full mixed mode), or to only allow 802.11g/n (partial mixed mode) connections, or to only allow 802.11n connections. The latter offers the highest performance for Wireless-N devices. Similarly you can configure Wireless-G access points to allow 802.11b/g (mixed mode) operation, or to only allow 802.11g connections. Restricting or disabling the mixed mode operation offers higher performance at the expense of restricting the types of devices that can connect.
Wireless Ethernet: 802.11n And Bluetooth: Network Basics
The latest wireless network standard, 802.11n (also known as Wireless-N), was published in October 2009. 802.11n hardware uses a technology called multiple input, multiple output (MIMO) to increase throughput and range. MIMO uses multiple radios and antennas to transmit multiple data streams (also known as spatial streams) between stations.
Unlike earlier 802.11 implementations, in which reflected radio signals slowed down throughput, 802.11n can use reflected signals to improve throughput as well as increase useful range.
802.11n is the first wireless Ethernet standard to support two frequency ranges or bands:
2.4 GHz (same as 802.11b/g)
5 GHz (same as 802.11a)
Thus, depending on the specific implementation of 802.11n in use, a dual-band 802.11n device may be able to connect with 802.11b, 802.11g, and 802.11a devices, whereas a single-band 802.11n device will be able to connect with 802.11b and 802.11g devices only.
Wireless-N devices can contain radios in a number of different configurations supported by the standard. The radios are defined or categorized by the number of transmit antennas, receive antennas, and data streams (also called spatial streams) they can support. A common notation has been devised to describe these configurations, which is written as a x b:c, where a is the maximum number of transmit antennas, b is the maximum number of receive antennas, and c is the maximum number of simultaneous data streams that can be used.
The maximum performance configuration supported by the standard is 4 x 4:4 (4 transmit antennas, 4 receive antennas, and 4 data streams), which would support bandwidths of up to 600 Mb/s; however, no devices using that configuration are currently on the market. Common configurations used in Wireless-N devices include 1 x 1:1, 1 x 2:1, and 2 x 2:1, which pair radios with 1 or 2 antennas with a single data stream for up to 150 Mb/s in bandwidth. Other common configurations include 2 x 2:2, 2 x 3:2, and 3 x 3:2, which pair radios with 2 or 3 antennas with up to two data streams for up to 300 Mb/s in bandwidth. Configurations with more antennas than data streams allow for increased signal diversity and range. The highest-performance Wireless-N devices generally available today use a 3 x 3:3 radio configuration, which supports three data streams for up to 450 Mb/s in bandwidth.
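The a x b:c notation and the per-stream rates above can be captured in a few lines. This sketch (the function names are my own, not from any standard) parses the notation and computes the maximum rate using the 150 Mb/s-per-stream ceiling cited in the text:

```python
import re

def parse_radio_config(notation):
    """Parse e.g. '3 x 3:3' into (tx_antennas, rx_antennas, streams)."""
    m = re.fullmatch(r"(\d+)\s*x\s*(\d+):(\d+)", notation.strip())
    if not m:
        raise ValueError(f"unrecognized notation: {notation!r}")
    return tuple(int(g) for g in m.groups())

def max_rate_mbps(notation, per_stream=150):
    # Only the stream count sets the peak rate; extra antennas beyond the
    # stream count add signal diversity and range, not bandwidth.
    tx, rx, streams = parse_radio_config(notation)
    return streams * per_stream

print(max_rate_mbps("3 x 3:3"))  # 450: the fastest common configuration
print(max_rate_mbps("4 x 4:4"))  # 600: the standard's maximum
```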
802.11n is significantly faster than 802.11g, but by how much? That depends mainly on how many data streams are supported, as well as whether a couple of other optional features are enabled. The base configuration uses 20 MHz wide channels with an 800 ns guard interval between transmitted signals. By using channel bonding to increase the channel width to 40 MHz, more than double the bandwidth can be achieved in theory. I say “in theory” because the wider channels work well under very strong signal conditions, but throughput can degrade rapidly under normal circumstances. In addition, the wider channel takes up more of the band, causing more interference with other wireless networks in range. In the real world, I’ve seen throughput decrease dramatically with 40 MHz channels, which is why the use of 40 MHz channels is disabled by default on most devices.
Another optional feature is a shorter guard interval (GI), which is the amount of time (in nanoseconds) the system waits between transmitting OFDM (orthogonal frequency division multiplexing) symbols in a data stream. Decreasing the guard interval from the standard 800 ns to an optional 400 ns increases the maximum bandwidth by about 10%. Just as with channel bonding (40 MHz channel width), this can cause problems if there is excessive interference or low signal strength, resulting in decreased overall throughput due to signal errors and retries. In the real world, however, the shorter guard interval doesn’t normally cause problems, so it is enabled in the default configuration of most devices.
Using three data streams with standard 20 MHz channels and the standard 800 ns guard interval, the maximum throughput of a Wireless-N connection is 195 Mb/s. Using the shorter 400 ns guard interval increases this to up to 216.7 Mb/s. As with other members of the 802.11 family of standards, 802.11n supports fallback rates when a connection cannot be made at the maximum data rate.
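The jump from 195 to 216.7 Mb/s follows directly from the symbol timing: an 802.11n OFDM symbol carries its payload in 3.2 µs, and the guard interval (800 or 400 ns) is idle time appended to each symbol, so shortening it raises throughput by the ratio of total symbol durations. A back-of-the-envelope check:

```python
SYMBOL_US = 3.2  # OFDM symbol payload duration in microseconds

def short_gi_rate(base_rate_mbps, long_gi_ns=800, short_gi_ns=400):
    """Scale a long-GI data rate to its short-GI equivalent."""
    long_symbol = SYMBOL_US + long_gi_ns / 1000   # 4.0 us per symbol
    short_symbol = SYMBOL_US + short_gi_ns / 1000  # 3.6 us per symbol
    return base_rate_mbps * long_symbol / short_symbol

# Three streams, 20 MHz channels, 800 ns GI: 195 Mb/s (65 Mb/s per stream).
print(round(short_gi_rate(195), 1))  # 216.7, matching the figure above
```

The same scaling applied to the 450 Mb/s three-stream, 40 MHz rate yields the often-quoted 500 Mb/s long-GI equivalent.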
The Wi-Fi Alliance first began certifying products that support 802.11n in its Draft 2 form in June 2007. The 802.11n standard was finally published in October 2009, and 802.11n Draft 2 or later products are considered to be compliant with the final 802.11n standard. In some cases, driver or firmware updates might be necessary to ensure full compliance. As with previous Wi-Fi certifications, the Wi-Fi 802.11n certification requires that hardware from different makers interoperate properly with each other. 802.11n hardware uses chips from makers including Atheros, Broadcom, Cisco, Intel, Marvell, and Ralink.
Bluetooth is a low-speed, low-power standard originally designed to interconnect laptop computers, PDAs, cell phones, and pagers for data synchronization and user authentication in public areas such as airports, hotels, rental car pickups, and sporting events. Bluetooth is also used for a variety of wireless devices on PCs, including printer adapters, keyboards, mice, headphones, DV camcorders, data projectors, and many others. A list of Bluetooth products and announcements is available at the official Bluetooth wireless information website.
Bluetooth devices also use the same 2.4 GHz frequency range that most Wi-Fi devices use. However, in an attempt to avoid interference with Wi-Fi, Bluetooth uses a signaling method called frequency hopping spread spectrum (FHSS), which switches the exact frequency used during a Bluetooth session 1600 times per second over the 79 channels Bluetooth uses. Unlike Wi-Fi, which is designed to allow a device to be part of a network at all times, Bluetooth is designed for ad hoc temporary networks (known as piconets) in which two devices connect only long enough to transfer data and then break the connection. The basic data rate supported by Bluetooth is currently 1 Mb/s (up from 700 Kb/s in earlier versions), but devices that support enhanced data rate (EDR) can reach a transfer rate up to 2.1 Mb/s.
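The hopping behavior described above can be illustrated in a few lines. Note that real Bluetooth derives its hop sequence from the master device's address and clock; the seeded pseudo-random generator here is only a stand-in for that algorithm.

```python
import random

CHANNELS = 79          # Bluetooth channels in the 2.4 GHz band
HOPS_PER_SECOND = 1600  # frequency changes per second during a session

def hop_sequence(seed, hops):
    """Toy FHSS hop schedule; the seed stands in for the address/clock."""
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(hops)]

one_second = hop_sequence(seed=0xBEEF, hops=HOPS_PER_SECOND)
print(len(one_second))                          # 1600 hops in one second
print(all(0 <= ch < CHANNELS for ch in one_second))  # True: all hops stay in band
```

Because both paired devices compute the same sequence, they land on the same channel at each hop while appearing to any other receiver as short bursts scattered across the band.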
The current version of Bluetooth is 4.0; versions 2.1 and later support easier connections between devices such as phones and headsets (a process known as pairing), longer battery life, and improved security compared to older versions. Version 3.0 adds a high-speed mode based on Wi-Fi, while 4.0 adds low-energy protocols for devices with extremely low power consumption.
Interference Issues Between Bluetooth and 802.11b/g/n Wireless
Despite the frequency-hopping nature of Bluetooth, studies have shown that Bluetooth and 802.11b/g/n devices can interfere with each other, particularly at close range (under 2 meters) or when users attempt to use both types of wireless networking at the same time (as with a wireless network connection on a computer also using a Bluetooth wireless keyboard and/or mouse). Interference reduces throughput and in some circumstances can cause data loss.
Bluetooth version 1.2 adds adaptive frequency hopping to solve interference problems when devices are more than 1 meter (3.3 feet) away from each other. However, close-range (less than 1 meter) interference can still take place. IEEE has developed 802.15.2, a specification for enabling coexistence between 802.11b/g/n and Bluetooth. It can use various time-sharing or time-division methods to enable coexistence. Bluetooth version 2.1 is designed to minimize interference by using an improved adaptive hopping method, whereas 3.0 and later add the ability to use 802.11 radios for high-speed transfers. Companies that build both Bluetooth and 802.11-family chipsets, such as Atheros and Texas Instruments (TI), have developed methods for avoiding interference that work especially well when same-vendor products are teamed together.
A few years ago, the second-most important choice you had to make when you created a network was which network protocol to use, because the network protocol determines which types of computers your network can connect to. Today, the choice has largely been made for you: TCP/IP has replaced other network protocols such as IPX/SPX (used in older versions of Novell NetWare) and NetBEUI (used in older Windows and DOS-based peer-to-peer networks and with Direct Cable Connection) because it can be used both for Internet and LAN connectivity. TCP/IP is a universal protocol that virtually all OSs can use.
Although data-link protocols such as Ethernet require specific types of hardware, network protocols are software and can be installed to or removed from any computer on the network at any time, as necessary. All the computers on any given network must use the same network protocol or protocol suite to communicate with each other.
IP and TCP/IP
IP stands for Internet Protocol; it is the network layer of the collection of protocols (or protocol suite) developed for use on the Internet and commonly known as TCP/IP.
Later, the TCP/IP protocols were adopted by the UNIX OSs. They have now become the most commonly used protocol suite on PC LANs. Virtually every OS with networking capabilities supports TCP/IP, and it is well on its way to displacing all the other competing protocols. Novell NetWare 6 and above, Linux, and Windows XP and newer all use TCP/IP as their native network protocol.
TCP/IP: LAN and Dial-up Networks
TCP/IP, unlike the other network protocols listed in the previous section, is also a protocol used by people who have never seen a NIC. People who access the Internet via modems (referred to as dial-up networking in some older Windows versions) use TCP/IP just as those who access the Web over an existing LAN do. Although the same protocol is used in both cases, the settings vary a great deal.
The following table summarizes the differences you’re likely to encounter. If you access the Internet with both modems and a LAN, you must ensure that the TCP/IP properties for modems and LANs are set correctly. You also might need to adjust your browser settings to indicate which connection type you are using.
Correct settings for LAN access to the Internet and dial-up networking (modem) settings are almost always completely different. In general, the best way to get your dial-up networking connection working correctly is to use your ISP’s automatic setup software. This is usually supplied as part of your ISP’s signup software kit. After the setup is working, view the properties and record them for future troubleshooting use.
The IPX protocol suite (often referred to as IPX/SPX) is the collective term for the proprietary protocols Novell created for its NetWare OS. Although based loosely on some of the TCP/IP protocols, Novell privately holds the IPX protocol standards. However, this has not prevented Microsoft from creating its own IPX-compatible protocol for the Windows OSs.
Internetwork Packet Exchange (IPX) is a network layer protocol that is equivalent in function to IP. The suite’s equivalent to TCP is the Sequenced Packet Exchange (SPX) protocol, which provides connection-oriented, reliable service at the transport layer.
The IPX protocols typically are used today only on networks with NetWare servers running older versions of NetWare. Often they are installed along with another protocol suite, such as TCP/IP. Novell has phased out its use of IPX for NetWare support and switched to TCP/IP—along with the rest of the networking industry—starting with NetWare 5. NetWare 5 uses IPX/SPX only for specialized operations. Most of the product uses TCP/IP. NetWare version 6 and above use TCP/IP exclusively.
NetBIOS Extended User Interface (NetBEUI) is a protocol that was used primarily on small Windows NT networks, as well as on peer networks based on Windows for Workgroups and Windows 9x. It was the default protocol in Windows NT 3.1, the first version of that OS. Later versions, however, use the TCP/IP protocols as their default.
Other Home Networking Solutions
If you are working at home or in a small office, you have an alternative to hole-drilling, pulling specialized network cabling, or setting up a wireless network.
So-called “home” networking is designed to minimize the complexities of cabling and wireless configuration by providing users with a sort of instant network that requires no additional wiring and little technical understanding to configure.
The two major standards in this area are
HomePNA (uses existing telephone wiring)
HomePlug (uses existing power lines and outlets)
Other than using Ethernet (wired or wireless), the most popular form of home networking involves adapting existing telephone wiring to networking by running network signals at frequencies above those used by the telephone system. Because HomePNA is the most developed and most broadly supported type of home networking, this discussion focuses on the HomePNA standards that the Home Phoneline Networking Alliance (www.homepna.org) has created. This alliance has most of the major computer hardware and telecommunications vendors among its founding and active membership.
The Home Phoneline Networking Alliance has developed three versions of its HomePNA standard. HomePNA 1.0, introduced in 1998, ran at only 1 Mb/s and was quickly superseded by HomePNA 2.0 in late 1999. HomePNA 2.0 supported up to 32 Mb/s, although most products ran at 10 Mb/s, bringing it to parity with 10BASE-T Ethernet. Although some vendors produced HomePNA 1.0 and 2.0 products, these versions of HomePNA never became popular. Both versions use a bus topology that runs over existing telephone wiring and are designed for PC networking only.
With the development of HomePNA 3.1 in 2007, the emphasis of HomePNA has shifted from strictly PC networking to a true “digital home” solution that incorporates PCs, set-top boxes, TVs, and other multimedia hardware on a single network.
HomePNA 3.1 is the latest version of the HomePNA standard. In addition to telephone wiring, HomePNA 3.1 supports coaxial cable used for services such as TV, set-top boxes, and IP phones. As shown in the figure below, HomePNA 3.1 incorporates both types of wiring into a single network that runs at speeds up to 320 Mb/s; carries voice, data, and IPTV service; and provides guaranteed quality of service (QoS) to avoid data collisions and disruptions to VoIP and streaming media. HomePNA refers to the ability to carry VoIP, IPTV, and data as a “triple-play.” HomePNA 3.1 also supports remote management of the network by the service provider.
Because HomePNA 3.1 has been designed to handle a mixture of traditional data and Internet telephone (VoIP) and TV (IPTV) service, HomePNA 3.1 hardware is being installed and distributed by telephone and media providers, rather than being sold primarily through retail channels. For example, AT&T uses HomePNA 3.1 for its AT&T U-verse IPTV, broadband, and VoIP service.
Power Line Networking
Home networking via power lines has been under development for several years, but electrical interference, inconsistent voltage, and security issues made the creation of a workable standard difficult until mid-2001. In June 2001, the HomePlug Powerline Alliance, a multi-vendor industry trade group, introduced its HomePlug 1.0 specification for 14 Mb/s home networking using power lines. The HomePlug Powerline Alliance (www.homeplug.org) conducted field tests in about 500 households early in 2001 to develop the HomePlug 1.0 specification.
HomePlug 1.0 is based on the PowerPacket technology developed by Intellon. PowerPacket uses a signaling method called orthogonal frequency division multiplexing (OFDM), which combines multiple signals at different frequencies to form a single signal for transmission. Because OFDM uses multiple frequencies, it can adjust to the constantly changing characteristics of AC power. To provide security, PowerPacket also supports 56-bit DES encryption and an individual key for each home network. By using PowerPacket technology, HomePlug 1.0 is designed to solve the power quality and security issues of concern to a home or small-office network user. Although HomePlug 1.0 is rated at 14 Mb/s, typical real-world performance is usually around 4 Mb/s for LAN applications and around 2 Mb/s when connected to a broadband Internet device such as a cable modem.
HomePlug 1.0 products include USB and Ethernet adapters, bridges, and routers, enabling most recent PCs with USB or Ethernet ports to use Powerline networking for LAN and Internet sharing. Linksys was the first to introduce HomePlug 1.0 products in late 2001; other leading vendors producing HomePlug hardware include Phonex, Netgear, and Asoka. HomePlug Turbo, an updated version of the HomePlug 1.0 standard, supports data rates up to 85 Mb/s, with typical throughput in the 15 Mb/s–20 Mb/s range.
The HomePlug AV specification with support for faster speeds (up to 200 Mb/s), multimedia hardware, and guaranteed bandwidth for multimedia applications was announced in the fall of 2002; the final HomePlug AV specification was approved in August 2005. When connecting HomePlug products, make sure that all of the devices support the same standard; that is, either HomePlug 1.0 (85 Mb/s) or HomePlug AV (200 Mb/s). Although HomePlug 1.0 and AV devices can coexist on the same powerline wiring, they can only communicate with devices supporting the same standard.
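The interoperability rule above is simple but easy to get wrong when buying hardware: HomePlug 1.0 and HomePlug AV devices can share the same powerline wiring, yet only devices supporting a common standard can exchange data. A minimal sketch of that check (the function name is my own, for illustration):

```python
def can_communicate(dev_a_standards, dev_b_standards):
    """Two HomePlug devices communicate only if they share a standard."""
    return bool(set(dev_a_standards) & set(dev_b_standards))

# Coexist on the wiring, but cannot talk to each other:
print(can_communicate({"HomePlug 1.0"}, {"HomePlug AV"}))                 # False
# A dual-standard bridge can talk to an AV-only device:
print(can_communicate({"HomePlug 1.0", "HomePlug AV"}, {"HomePlug AV"}))  # True
```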
HomePlug AV2, the next generation of HomePlug, is currently under development and will support speeds up to 600 Mb/s.
The HomePlug Powerline Alliance uses certification marks to indicate which HomePlug certifications are supported by a particular device. The image below shows the original and new HomePlug certification marks.
In the McAfee Labs Threats Report June 2018, published today, we share investigative research and threat statistics gathered by the McAfee Advanced Threat Research and McAfee Labs teams in Q1 of this year. We have observed that although overall new malware has declined by 31% since the previous quarter, bad actors are working relentlessly to develop new technologies and tactics that evade many security defenses.
These are the key campaigns we cover in this report.
Deeper investigations reveal that the attack targeting organizations involved in the Pyeongchang Winter Olympics in South Korea used not just one PowerShell implant script, but multiple implants, including Gold Dragon, which established persistence to engage in reconnaissance and enable continued data exfiltration.
The infamous global cybercrime ring known as Lazarus has resurfaced. We discovered that the group has launched the Bitcoin-stealing phishing campaign “HaoBao,” which targets the financial sector and Bitcoin users.
We are also seeing the emergence of a complex, multisector campaign dubbed Operation GhostSecret, which uses many data-gathering implants. We expect to see an escalation of these attacks in the near future.
Here are some additional findings and insights:
Ransomware drops: New ransomware attacks took a significant dive (-32%), largely as a result of an 81% drop in Android lockscreen malware.
Cryptojacking makes a comeback: Attackers targeting cryptocurrencies may be moving from ransomware to coin miner malware, which hijacks systems to mine for cryptocurrencies and increase their profits. New coin miner malware jumped an astronomical 1,189% in Q1.
LNK outpaces PowerShell: Cybercriminals are increasingly using LNK shortcuts to surreptitiously deliver malware. New PowerShell malware dropped 77% in Q1, while attacks leveraging Microsoft Windows LNK shortcut files jumped 24%.
Incidents go global: Overall security incidents rose 41% in Q1, with incidents hitting multiple regions showing the biggest increase, at 67%, and the Americas showing the next largest increase, at 40%.
Did you know Grammarly has a product for just about every kind of writing you do? We have an online editor for drafting long documents, plus desktop apps and a Microsoft Office add-in if you prefer not to write in your browser. The Grammarly Keyboard for iOS and Android keeps you looking polished even when you’re writing from your phone. And of course, there’s the Grammarly browser extension, which checks your writing on all your favorite websites.
How does Grammarly check your writing?
Underlying all of Grammarly’s products is a sophisticated artificial intelligence system built to analyze sentences written in English. Grammarly’s team of computational linguists designs cutting-edge algorithms that learn the rules and hidden patterns of good writing by analyzing millions of sentences from research corpora. (A corpus is a large collection of text that has been organized and annotated for research purposes.) When you write with Grammarly, our AI analyzes each sentence and looks for ways to improve it, whether it’s correcting a verb tense, suggesting a stronger synonym, or offering a clearer sentence structure.
As you can imagine, a complex AI system like this one requires a lot of computing power—much more than a personal computer or mobile device can provide. For that reason, it runs in the cloud, rather than locally on your device. All you need to check your writing with a Grammarly product is an Internet connection.
When you use Grammarly, you can help improve its suggestions. Anytime you hit “ignore” on an unhelpful suggestion, Grammarly gets a little bit smarter. Over time, our team can make adjustments to the suggestions with high ignore rates to make them more helpful.
There’s more to good writing than grammar and spelling
Grammarly’s earliest breakthroughs in AI-powered writing enhancement happened in the realm of grammar, spelling, and punctuation correction—a fact that’s reflected in our name to this day. We could have stopped there, but the truth is, just because something’s grammatically correct doesn’t mean it’s clear or compelling.
Over the years, we’ve continually added new types of feedback to help you fix wordiness, vagueness and hedging, poor word choice, gnarly sentence structure, and even plagiarism. We add new writing checks all the time, so when you see a suggestion you don’t remember encountering before, it’s probably not your imagination.
All about context
Grammarly’s writing tools are designed to work where you do—on your phone and your computer, in your web browser or your word processor. The difference between Grammarly and built-in spelling and grammar checkers isn’t just accuracy or breadth of feedback. It’s also contextual awareness. After all, an email to your boss probably shouldn’t sound like a text to your best friend.
Grammarly’s browser extension, for example, makes stricter grammar corrections and offers suggestions to help you sound more formal and professional when you’re writing on LinkedIn. Grammarly Premium users can adjust their style settings for any text field on any site. When you’re writing something formal, you can switch to the academic or business settings to flag contractions, unclear antecedents, and other casualisms you want to avoid. But when you’re posting on Facebook and you want to write in a more relaxed voice, Grammarly’s casual setting will turn off alerts for the passive voice and informalities like slang and sentence fragments.
It’s easy to get started
Ready to give it a try? Installation is simple and free. Read on for some helpful tips about Grammarly’s products.
When you add the Grammarly extension to your browser, you’ll be able to directly access Grammarly’s writing suggestions from Gmail, LinkedIn, Twitter, Facebook, and most other sites on the web. You’ll know it’s working when you see a green G in the lower right corner of the text field you’re writing in. Basic writing corrections will appear inline, and clicking the green G allows you to open a more robust pop-up editor to access Premium corrections.
Adding the Grammarly Keyboard to your iPhone or Android device helps you write clearly and effectively in any app, on any website. So you can say goodbye to textfails, and you can relax when you need to answer an urgent email on the go.
Our team is working hard to bring you products and features that help you express yourself. To learn more about what that means and to get an idea of where we’re headed, check out our post about Grammarly’s vision of creating a comprehensive communication assistant.
Microsoft is bringing another update to its Android launcher, although for the moment the changes are only available in the beta channel. If you’d like to get a taste of what’s coming to Microsoft Launcher, you can always enroll in the beta program, but keep in mind that some features might not work as intended. But don’t worry if you don’t like testing new features before they’re released; we’ll let you know what to expect from the upcoming update. As the title says, Microsoft will be adding custom app icons and folder gestures, but there are a few other changes included in the update. For …
Technology companies have a privacy problem. They’re terribly good at invading ours and terribly negligent at protecting their own.
And with the push by technologists to map, identify and index our physical as well as virtual presence with biometrics like face and fingerprint scanning, the increasing digital surveillance of our physical world is causing some of the companies that stand to benefit the most to call out to government to provide some guidelines on how they can use the incredibly powerful tools they’ve created.
That’s what’s behind today’s call from Microsoft President Brad Smith for government to start thinking about how to oversee the facial recognition technology that’s now at the disposal of companies like Microsoft, Google, Apple and government security and surveillance services across the country and around the world.
In what companies have framed as a quest to create “better,” more efficient and more targeted services for consumers, they have tried to solve the problem of user access by moving to increasingly passive (for the user) and intrusive (by the company) forms of identification — culminating in features like Apple’s Face ID and the frivolous filters that Snap overlays over users’ selfies.
Those same technologies are also being used by security and police forces in ways that have gotten technology companies into trouble with consumers or their own staff. Amazon has been called to task for its work with law enforcement, Microsoft’s own technologies have been used to help identify immigrants at the border (indirectly aiding in the separation of families and the virtual and physical lockdown of America against most forms of immigration) and Google faced an internal company revolt over the facial recognition work it was doing for the Pentagon.
Smith posits this nightmare scenario:
Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies – like “Minority Report,” “Enemy of the State” and even “1984” – but now it’s on the verge of becoming possible.
What’s impressive about this is the intimation that it isn’t already happening (and that Microsoft isn’t enabling it). Across the world, governments are deploying these tools right now as ways to control their populations (the ubiquitous surveillance state that China has assembled, and is investing billions of dollars to upgrade, is just the most obvious example).
In this moment when corporate innovation and state power are merging in ways that consumers are only just beginning to fathom, executives who have to answer to a buying public are now pleading for government to set up some rails. Late capitalism is weird.
But Smith’s advice is prescient. Companies do need to get ahead of the havoc their innovations can wreak on the world, and they can look good while doing nothing by hiding their own abdication of responsibility on the issue behind the government’s.
“In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms. Facial recognition will require the public and private sectors alike to step up – and to act,” Smith writes.
The fact is, something does, indeed, need to be done.
As Smith writes, “The more powerful the tool, the greater the benefit or damage it can cause. The last few months have brought this into stark relief when it comes to computer-assisted facial recognition – the ability of a computer to recognize people’s faces from a photo or through a camera. This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike.”
All of this takes on faith that the technology actually works as advertised. And the problem is, right now, it doesn’t.
In an op-ed earlier this month, Brian Brackeen, the chief executive of a startup working on facial recognition technologies, pulled back the curtains on the industry’s not-so-secret huge problem.
Facial recognition technologies, used in the identification of suspects, negatively affects people of color. To deny this fact would be a lie.
And clearly, facial recognition-powered government surveillance is an extraordinary invasion of the privacy of all citizens — and a slippery slope to losing control of our identities altogether.
There’s really no “nice” way to acknowledge these things.
Smith himself admits that the technology has a long way to go before it’s perfect. But the implications of applying imperfect technologies are vast — and in the case of law enforcement, not academic. Designating an innocent bystander or civilian as a criminal suspect influences how police approach an individual.
Those instances, even if they amount to only a handful, would lead me to argue that these technologies have no business being deployed in security situations.
As Smith himself notes, “Even if biases are addressed and facial recognition systems operate in a manner deemed fair for all people, we will still face challenges with potential failures. Facial recognition, like many AI technologies, typically have some rate of error even when they operate in an unbiased way.”
While Smith lays out the problem effectively, he’s less clear on the solution. He’s called for a government “expert commission” to be empaneled as a first step on the road to eventual federal regulation.
That we’ve gotten here is an indication of how bad things actually are. It’s rare that a tech company has pleaded so nakedly for government intervention into an aspect of its business.
But here’s Smith writing, “We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology. As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.”
Given the current state of affairs in Washington, Smith may be asking too much. Which is why perhaps the most interesting — and admirable — call from Smith in his post is for technology companies to slow their roll.
“We recognize the importance of going more slowly when it comes to the deployment of the full range of facial recognition technology,” writes Smith. “Many information technologies, unlike something like pharmaceutical products, are distributed quickly and broadly to accelerate the pace of innovation and usage. ‘Move fast and break things’ became something of a mantra in Silicon Valley earlier this decade. But if we move too fast with facial recognition, we may find that people’s fundamental rights are being broken.”
Thanks to my colleague Christiaan Beek for his advice and contributions.
While researching underground hacker marketplaces, the McAfee Advanced Threat Research team has discovered that access linked to security and building automation systems of a major international airport could be bought for only US$10.
The dark web contains RDP shops, online platforms selling remote desktop protocol (RDP) access to hacked machines, from which one can buy logins to computer systems to potentially cripple cities and bring down major companies.
RDP, a proprietary protocol developed by Microsoft that allows a user to access another computer through a graphical interface, is a powerful tool for systems administrators. In the wrong hands, RDP can be used to devastating effect. The recent SamSam ransomware attacks on several American institutions demonstrate how RDP access serves as an entry point. Attacking a high-value network can be as easy and cheap as going underground and making a simple purchase. Cybercriminals like the SamSam group need to spend only an initial $10 to gain access, and they charge $40,000 in ransom for decryption, not a bad return on investment.
A screenshot of Blackpass.bz, one of the most popular RDP-shops, largely due to the variety of services offered.
Security maven Brian Krebs wrote the article “Really Dumb Passwords” in 2013. That short phrase encapsulates the vulnerability of RDP systems. Attackers simply scan the Internet for systems that accept RDP connections and launch a brute-force attack with popular tools such as Hydra, NLBrute, or RDP Forcer to gain access. These tools combine password dictionaries with the vast number of credentials stolen in recent large data breaches. Five years later, RDP shops are even larger and easier to access.
The McAfee Advanced Threat Research team looked at several RDP shops, ranging in size from 15 to more than 40,000 RDP connections for sale at Ultimate Anonymity Service (UAS), a Russian business and the largest active shop we researched. We also looked at smaller shops found through forum searches and chats. During the course of our research we noticed that the size of the bigger shops varies from day to day by about 10%. The goal of our research was not to create a definitive list of RDP shops; rather, we sought a better understanding of the general modus operandi, products offered, and potential victims.
The number of compromised systems claimed to be available for sale by several RDP shops. A single compromised system can appear on more than one shop’s list.
RDP access by cybercriminals
How do cybercriminals (mis)use RDP access? RDP was designed to be an efficient way to access a network. By leveraging RDP, an attacker need not create a sophisticated phishing campaign, invest in malware obfuscation, use an exploit kit, or worry about antimalware defenses. Once attackers gain access, they are in the system. Scouring the criminal underground, we found the top uses of hacked RDP machines promoted by RDP shops.
False flags: Using RDP access to create misdirection is one of the most common applications. While preserving anonymity, an attacker can make it appear as if his illegal activity originates from the victim’s machine, effectively planting a false flag for investigators and security researchers. Attackers can plant this flag by compiling malicious code on the victim’s machine, purposely creating false debugging paths and changing compiler environment traces.
Spam: Just as spammers use giant botnets such as Necurs and Kelihos, RDP access is popular among a subset of spammers. Some of the systems we found for sale are actively promoted for mass-mailing campaigns, and almost all the shops offer a free blacklist check to see whether the systems have been flagged by Spamhaus and other antispam organizations.
Account abuse, credential harvesting, and extortion: By accessing a system via RDP, attackers can obtain almost all data stored on a system. This information can be used for identity theft, account takeovers, credit card fraud, and extortion.
Cryptomining: In the latest McAfee Labs Threats Report, we wrote about the increase in illegal cryptocurrency mining due to the rising market value of digital currencies. We found several criminal forums actively advertising Monero mining as a use for compromised RDP machines.
Monero mining via RDP advertised on a cybercriminal forum.
Ransomware: The large majority of ransomware is still spread by phishing emails and exploit kits. However, specialized criminal groups such as SamSam are known to use RDP to easily enter their victims’ networks almost undetected.
RDP shop overview
Systems for sale: The advertised systems ranged from Windows XP through Windows 10. Windows 2008 and 2012 Server were the most abundant systems, with around 11,000 and 6,500, respectively, for sale. Prices ranged from around US$3 for a simple configuration to US$19 for a high-bandwidth system that offered access with administrator rights.
Third-party resellers: When comparing “stock” among several RDP shops, we found that the same RDP machines were sold at different shops, indicating that these shops act as resellers.
Windows Embedded Standard: Windows Embedded Standard, now called Windows IoT, is used in a wide variety of systems that require a small footprint. These systems range from thin clients to hotel kiosks, announcement boards, point-of-sale (POS) systems, and even parking meters.
Among the thousands of RDP-access systems offered, some configurations stood out. We found hundreds of identically configured Windows Embedded Standard machines for sale at UAS Shop and BlackPass; all these machines were in the Netherlands. This configuration was equipped with a 1-GHz VIA Eden processor. An open-source search of this configuration revealed that it is most commonly used in thin clients and some POS systems. The configurations are associated with several municipalities, housing associations, and health care institutions in the Netherlands.
Thin client and POS systems are often overlooked and not commonly updated, making them an ideal backdoor target for an attacker. Although these systems have a small physical footprint, the business impact of having one compromised should not be underestimated. As we have observed in previous breaches of retailers through unpatched or vulnerable POS systems, the damage extends far beyond the financial, hurting customer perception and long-term brand reputation. McAfee has notified the identified victims and is working to learn how and why these identically configured Windows systems were compromised.
Government and health care institutions: We also came across multiple government systems being sold worldwide, including those linked to the United States, and dozens of connections linked to health care institutions, from hospitals and nursing homes to suppliers of medical equipment. In a March blog post, the Advanced Threat Research team showed the possible consequences of ill-secured medical data and what can happen when an attacker gains access to medical systems. It is very troublesome to see that RDP shops offer an easy way in.
Additional products for sale
Services offered by our researched RDP shops.
In addition to selling RDP, some of these shops offer a lively trade in social security numbers, credit card data, and logins to online shops. The second-largest RDP shop we researched, BlackPass, offered the widest variety of products. The most prolific of these brokers provide one-stop access to all the tools used to commit fraud: RDP access into computers, social security numbers, and other data needed to set up loans or open bank accounts.
For legal and ethical reasons, we did not purchase any of the products offered. Therefore, we cannot determine the quality of the services.
RDP ransomware attack scenario
Is it possible to find a high-value victim using an RDP shop? The Advanced Threat Research team put this theory to the test. By leveraging the vast amounts of connections offered by the RDP shops, we were able to quickly identify a victim that fits the profile of a high-value target in the United States.
We found a newly posted (on April 16) Windows Server 2008 R2 Standard machine on the UAS Shop. According to the shop details, it belonged to a city in the United States, and for a mere $10 we could gain administrator rights to this system.
RDP access offered for sale.
UAS Shop hides the last two octets of the IP addresses of the systems it offers for sale and charges a small fee for the complete address. (We did not pay for any services offered by UAS or any other shop.) To locate the system being sold, we used shodan.io to search for any open RDP ports at that specific organization using this query:
org:"City XXX" port:"3389"
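The narrowing step itself is easy to reason about even without a Shodan API key: filter host records by organization and open port. The sketch below runs against hypothetical, hard-coded records (the IPs and organization name are placeholders) rather than live Shodan results.

```python
# Sketch: filter host records by organization and open port, mimicking the
# effect of the Shodan query org:"City XXX" port:"3389".
# All records below are hypothetical examples.
def find_rdp_hosts(records, org, port=3389):
    """Return hosts that belong to `org` and expose `port`."""
    return [r for r in records if r["org"] == org and port in r["ports"]]

hosts = [
    {"ip": "198.51.100.7",  "org": "City XXX",  "ports": [80, 3389]},
    {"ip": "198.51.100.12", "org": "City XXX",  "ports": [443]},
    {"ip": "203.0.113.5",   "org": "Other Org", "ports": [3389]},
]
print(find_rdp_hosts(hosts, "City XXX"))  # only the first record matches
```

With live data, the same filter is what reduces tens of thousands of candidate addresses in an organization's range to the handful actually exposing RDP.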
The results were far more alarming than we anticipated. The Shodan search narrowed 65,536 possible IPs to just three that matched our query. By obtaining a complete IP address we could now look up the WHOIS information, which revealed that all the addresses belonged to a major international airport. This is definitely not something you want to discover on a Russian underground RDP shop, but the story gets worse.
From bad to worse
Two of the IP addresses presented a screenshot of the accessible login screens.
A login screen that matches the configuration offered in the RDP shop.
A closer look at the screenshots shows that the Windows configuration (preceding screen) is identical to the system offered in the RDP shop. There are three user accounts available on this system, one of which is the administrator account. The names of the other accounts seemed unimportant at first but after performing several open-source searches we found that the accounts were associated with two companies specializing in airport security; one in security and building automation, the other in camera surveillance and video analytics. We did not explore the full level of access of these accounts, but a compromise could offer a great foothold and lateral movement through the network using tools such as Mimikatz.
The login screen of a second system on the same network.
Looking at the other login account (preceding screen), we saw that it is part of a domain with a very specific abbreviation. We performed the same kind of open-source search on that account and found the domain is most likely associated with the airport's automated transit system, the passenger transport system that connects terminals. It is troublesome that a system with such significant public impact might be openly accessible from the Internet.
Now we know that attackers, like the SamSam group, can indeed use an RDP shop to gain access to a potential high-value ransomware victim. We found that access to a system associated with a major international airport can be bought for only $10—with no zero-day exploit, elaborate phishing campaign, or watering hole attack.
To publish our findings, we have anonymized the data to prevent any disclosure of sensitive security information.
Basic forensic and security advice
Playing hide and seek
Besides selling countless connections, RDP shops offer tips on how to remain undetected when an attacker wants to use the freshly bought RDP access.
This screen from the UAS Shop’s FAQ section explains how to add several registry keys to hide user accounts.
The UAS Shop offers a zip file with a patch to enable multiuser RDP access, which is not possible by default on some Windows versions. The zip file contains two .reg files that alter the Windows registry and a patch file that alters termsrv.dll to allow concurrent remote desktop connections.
These alterations to the registry and files leave obvious traces on a system. Those indicators can be helpful when investigating misuse of RDP access.
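These traces lend themselves to simple triage scripting. As a minimal sketch, the function below scans exported .reg text for one well-known hiding mechanism, the Winlogon SpecialAccounts\UserList key (a real Windows feature for hiding accounts from the logon screen); the indicator list and the sample export are illustrative, not exhaustive.

```python
# Sketch: scan exported registry text for indicators associated with hidden
# user accounts. Only one well-known key is listed here; a real triage list
# would contain more indicators.
SUSPICIOUS_KEYS = [
    r"Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList",
]

def find_indicators(reg_text):
    """Return every line of the export that matches a suspicious key."""
    hits = []
    for line in reg_text.splitlines():
        for key in SUSPICIOUS_KEYS:
            if key.lower() in line.lower():
                hits.append(line.strip())
    return hits

# Hypothetical .reg export hiding an account named "backdoor".
sample = r"""Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList]
"backdoor"=dword:00000000
"""
print(find_indicators(sample))  # flags the SpecialAccounts\UserList line
```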
In addition to checking for these signs, it is good practice to check the Windows event and security logs for unusual logon types and RDP use. The following screen, from the well-known SANS Digital Forensics and Incident Response poster, explains where the logs can be found.
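Once the relevant logs have been exported, this check can be scripted. The sketch below filters a CSV export of the Security log for Event ID 4624 (successful logon) with logon type 10 (RemoteInteractive), the type Windows records for RDP sessions; the column names and sample rows are hypothetical, and failed logons (Event ID 4625) can be checked the same way.

```python
# Sketch: flag RDP logons in a CSV export of the Windows Security log.
# Event ID 4624 = successful logon; logon type 10 = RemoteInteractive (RDP).
import csv
import io

def rdp_logons(csv_text):
    """Return rows recording a successful RemoteInteractive (RDP) logon."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader
            if row["EventID"] == "4624" and row["LogonType"] == "10"]

# Hypothetical export: one RDP logon, one local logon, one failed attempt.
export = """EventID,LogonType,Account,SourceIP
4624,10,admin,203.0.113.9
4624,2,localuser,-
4625,10,admin,203.0.113.9
"""
print(rdp_logons(export))  # one RemoteInteractive logon
```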
Outside access to a network can be necessary, but it always comes with risk. We have summarized some basic RDP security measures:
Use complex passwords and two-factor authentication to make brute-force RDP attacks less likely to succeed
Do not allow RDP connections over the open Internet
Lock out users and block or timeout IPs that have too many failed login attempts
Regularly check event logs for unusual login attempts
Consider using an account-naming convention that does not reveal organizational information
Enumerate all systems on the network, and list how they are connected and through which protocols. This also applies to Internet of Things and POS systems.
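The lockout advice above can be sketched as a simple counter per source IP: block an address once it exceeds a failure threshold inside a time window. The threshold and window below are illustrative defaults, not recommendations, and a production system would use the operating system's own lockout policy rather than this toy class.

```python
# Sketch of threshold-based lockout: count failed logins per source IP
# within a sliding time window and block IPs that exceed the limit.
from collections import defaultdict

class LoginGuard:
    def __init__(self, max_failures=5, window=300):
        self.max_failures = max_failures
        self.window = window          # seconds
        self.failures = defaultdict(list)
        self.blocked = set()

    def record_failure(self, ip, now):
        # Keep only failures inside the window, then add the new one.
        recent = [t for t in self.failures[ip] if now - t < self.window]
        recent.append(now)
        self.failures[ip] = recent
        if len(recent) >= self.max_failures:
            self.blocked.add(ip)

    def is_blocked(self, ip):
        return ip in self.blocked

guard = LoginGuard()
for t in range(5):                       # five quick failures from one IP
    guard.record_failure("203.0.113.9", now=t)
print(guard.is_blocked("203.0.113.9"))   # True
print(guard.is_blocked("198.51.100.7"))  # False
```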
Remotely accessing systems is essential for system administrators to perform their duties. Yet they must take the time to set up remote access in a way that is secure and not easily exploitable. RDP shops are stockpiling addresses of vulnerable machines and have reduced the effort hackers spend selecting victims to a simple online purchase.
Governments and organizations spend billions of dollars every year to secure the computer systems we trust. But even a state-of-the-art solution cannot provide security when the backdoor is left open or carries only a simple padlock. Just as we check the doors and windows when we leave our homes, organizations must regularly check which services are accessible from the outside and how they are secured. Protecting systems requires an integrated approach of defense in depth and proactive attitudes from every employee.