The Internet On The Shop Floor: Let's Keep It Safe

Use of the Internet on the shop floor brings many benefits, but they can be offset by concurrent security risks. (2002 Guide To Metalworking On The Internet)

Article from: 5/1/2002 Modern Machine Shop


Fig. 1—Transmission control protocol (TCP) adds source and destination addresses and media control to application data and encapsulates this information as a serial bit stream.

Fig. 2—In traveling down the TCP stack, user data and several types of headers are linked and encapsulated for Ethernet transmission.

The Internet has arrived on the shop floor. For corroboration, just browse any publication covering industrial automation. The realization soon hits—not only has it arrived, but it has arrived in force.

In the 2001 Modern Machine Shop Guide To Metalworking On The Internet, James R. Fall, president and CEO of Manufacturing Data Systems Inc. (MDSI) (Ann Arbor, Michigan), wrote the article "What Does It Take To Internet-Enable Machine Tools?" in which he discussed why and how to enable shop floor data collection using a software-based computer numerical control (CNC) with a database at its core. The volume of current interest in this area points indisputably toward a rapid proliferation of Internet connectivity in factory applications that was unthinkable less than a decade ago.

The adoption of Ethernet networks in factory applications fuels and coincides with that rapid proliferation. There is still room to debate whether Ethernet is appropriate for every factory network, but few would now argue that a network technology with office automation roots has no place on the shop floor. Its adoption has been rapid, driven by installed cost advantages over proprietary networks plus easier integration with business systems.

All major providers of industrial networks and automation equipment now support Ethernet connectivity and some form of Internet access (See sidebar at right). The obvious hot trend now is to go the next step by offering World Wide Web connectivity to shopfloor devices.


In the last three decades, the semiconductor industry has had the astonishing record of doubling processor computing bandwidth about every two to three years. Upon this technological foundation, three forces merged to bring us to the current state:

1. Development of the Internet

2. A very strong push by major end users of industrial automation for the adoption of standards-based, open systems

3. The emergence of the Web, with its attendant multiplatform, browser-based user interface.

Ethernet and transmission control protocol/Internet protocol (TCP/IP) can be traced to the first driving force. The second brought an abortive journey through manufacturing automation protocol (MAP) prior to the merger of industrial and office networks that is now under way. Today, the third driving force is enabling unprecedented freedom of access to information from any computer, anywhere in the world, so long as it has a browser and access to the Internet. A more detailed examination of these forces and the changes they wrought may offer clues to future evolution.

In 1981, the Internet was born—the product of an effort that the Defense Advanced Research Projects Agency (DARPA) funded to develop a communications system linking isolated defense research projects. From that work sprang a scheme for enabling computers at geographically dispersed sites to establish connections over existing telecommunications media and to exchange data packets independently of the computer technology at each node.

The scheme was the first instantiation of TCP/IP; the domain name system (DNS) followed shortly afterward. TCP describes the mechanisms and format for encapsulating data packets into standardized, content-independent bit streams for presentation to the network. IP describes the mechanisms and format by which a sender discovers how to address and deliver a message to a specific receiver. DNS provides hierarchical listings of IP addresses, which are unique to specific network nodes. An IP address is a string of numbers and periods that is mapped by a DNS to a user-friendly uniform resource locator (URL) name. Note that these definitions are considerably simplified from the more detailed and specific information available on the Internet.
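The name-to-address mapping described above can be exercised directly from any computer with a TCP/IP stack. The minimal sketch below uses Python's standard `socket` module; "localhost" is used as the lookup target because it resolves on any machine without requiring Internet access.

```python
import socket

def resolve(hostname):
    """Map a human-friendly name to its numeric IP address,
    the same kind of lookup DNS performs for Internet nodes."""
    return socket.gethostbyname(hostname)

# "localhost" maps to the loopback address of the local machine,
# so this lookup works even with no outside network connection.
address = resolve("localhost")
print(address)
```

A real DNS query for a public name works the same way; only the hostname changes.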

Through the late 1980s and early 1990s, Ethernet networks were only one of several competing technologies (primary among them being token ring and token bus) in the race to dominate office-environment networks.

Ethernet used carrier sense multiple access/collision detect (CSMA/CD) media access control. CSMA/CD means that when a node wants to send a message, it checks for a carrier already "on the wire"—to see whether another node is transmitting. If so, the node wanting to send a message waits for a time and checks again. When the wire is no longer in use, the node starts its own carrier and begins to transmit. Collision detection comes into play when two nodes try to transmit simultaneously. In that case, both stop transmitting and wait a randomly generated time, then try again.
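The carrier-sense-and-back-off decision can be sketched in a few lines. This is an illustrative toy, not network interface firmware; the slot counts and the growth of the backoff window with each attempt are assumptions chosen only to mirror the behavior described above.

```python
import random

def try_to_send(wire_busy, attempt, max_wait_slots=8):
    """One step of the CSMA/CD decision: returns ('transmit', 0)
    when the wire is free, otherwise ('wait', slots) with a
    randomly chosen backoff delay that grows with each attempt."""
    if wire_busy:
        # Carrier sensed: another node is transmitting, so wait a
        # random number of time slots before checking again.
        slots = random.randint(1, max_wait_slots * attempt)
        return ("wait", slots)
    # Wire is free: start our own carrier and transmit.
    return ("transmit", 0)

# Busy wire: the node backs off for a random interval.
action, delay = try_to_send(wire_busy=True, attempt=1)
# Free wire: the node transmits immediately.
action2, delay2 = try_to_send(wire_busy=False, attempt=1)
```

The random backoff is the crucial point: two colliding nodes are unlikely to pick the same delay, so they rarely collide twice in a row.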

The two primary Ethernet competitors used different varieties of token-passing access control. Both token technologies used some form of message priority modification to guarantee access to the network within some maximum time. The competing technologies may have been technically superior to Ethernet but, if so, that made little difference. Ethernet adopted TCP/IP, making remote data connections over the fledgling Internet as easy and transparent as local data connections. That advantage, plus Ethernet's low cost for media and interfaces, rapidly won out in office-environment information technology applications.

Figure 1 depicts TCP. Figure 2 offers a view of the encapsulation of data from the application to the Ethernet wires.
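The layering in the figures can be mimicked in code: each layer prepends its own header to whatever the layer above handed it. The header fields and sizes below are deliberately simplified stand-ins, not the real TCP, IP, or Ethernet wire formats.

```python
import struct

def encapsulate(payload: bytes) -> bytes:
    """Wrap application data in simplified TCP, IP, and Ethernet
    headers, mimicking the layering of Figures 1 and 2.
    All field sizes are illustrative, not the real formats."""
    # Simplified TCP header: source and destination ports (2 bytes each).
    tcp_segment = struct.pack("!HH", 49152, 80) + payload
    # Simplified IP header: 4-byte source and destination addresses.
    ip_packet = struct.pack(
        "!4s4s", bytes([192, 168, 0, 10]), bytes([10, 0, 0, 5])
    ) + tcp_segment
    # Simplified Ethernet frame: 6-byte destination and source MACs.
    frame = struct.pack(
        "!6s6s", b"\xaa\xbb\xcc\xdd\xee\xff", b"\x11\x22\x33\x44\x55\x66"
    ) + ip_packet
    return frame

frame = encapsulate(b"GET / HTTP/1.0\r\n\r\n")
```

Reading the frame back from the outside in—Ethernet, then IP, then TCP—is exactly the unwrapping a receiving node performs in reverse.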

Another crucial addition to Internet technology—one that ultimately made the Internet an indispensable tool on virtually every office workstation and home computer—was the browser. Browsers, and the data format standards that enable them, provide a consistent, platform-independent, intuitive and powerful user interface.

Many Internet users are also familiar with some of the higher-layer application protocols that use TCP/IP to access the Internet. These include the Web's hypertext transfer protocol (HTTP); file transfer protocol (FTP); Telnet, which allows users to log on to remote computers; and the simple mail transfer protocol (SMTP). These and other protocols are often packaged with TCP/IP as a "suite."
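These application protocols are mostly plain text riding on top of TCP. As a minimal sketch, the function below composes the text of an HTTP request exactly as a browser would before handing it to the TCP/IP layers; the host name is an arbitrary example.

```python
def http_get_request(host: str, path: str = "/") -> bytes:
    """Compose the plain-text HTTP request a browser hands down
    to TCP/IP. HTTP is one of several text-based protocols
    (FTP, SMTP, Telnet) that ride on the TCP/IP suite."""
    lines = [
        f"GET {path} HTTP/1.1",   # request line: method, resource, version
        f"Host: {host}",          # required header in HTTP/1.1
        "Connection: close",      # ask the server to close after replying
        "", "",                   # blank line terminates the header block
    ]
    return "\r\n".join(lines).encode("ascii")

request = http_get_request("example.com")
```

Sent over a TCP connection to port 80, these bytes would elicit a complete Web page in reply; the protocol itself is nothing more than structured text.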

In retrospect, the factors that made Ethernet a winner over token ring and token bus for office environments are obvious. Using low-cost media and interface electronics, all the computers in a single facility could be connected to transfer files and send messages. Any network node could even communicate with external domains without the need for format translation software. Additional functions quickly emerged for enabling remote connectivity, launching remote applications, and so forth.

A significant new software development sector emerged, focusing on standardized interfaces to ease application integration over TCP/IP-based networks. Over a relatively brief period, developers produced standardized mechanisms for using browser functionality to manipulate:

  • Database records
  • Secure connections for Internet-based commerce (secure sockets layer, SSL)
  • Dynamic Web pages where a framework is populated from database records
  • Languages for platform-independent software programming (Java, ActiveX)
  • Standardized object interfaces (CORBA, DCOM)
  • And, perhaps most powerful of all, a new format for delivering data—extensible markup language (XML).

Industry groups are developing naming conventions for XML data, or "schema," for business applications. These schema will enable interoperability among business applications without the need for data format translations. (For those who want to know more, detailed explanations for the undefined terms and acronyms presented above may be found online.)
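The interoperability point can be made concrete with a small sketch using Python's standard `xml.etree` library. The element names below are hypothetical, not from any real industry schema; the point is that any application agreeing on the same names can read the record without custom translation software.

```python
import xml.etree.ElementTree as ET

# Hypothetical schema-style element names for a machine-status
# record; real industry schemas define their own conventions.
record = ET.Element("MachineStatus")
ET.SubElement(record, "MachineId").text = "CNC-014"
ET.SubElement(record, "State").text = "running"
ET.SubElement(record, "PartsCompleted").text = "312"

# Serialize to a plain-text XML document for transmission.
document = ET.tostring(record, encoding="unicode")

# A receiving application that knows the same element names can
# parse the record directly, with no format-translation layer.
parsed = ET.fromstring(document)
state = parsed.findtext("State")
```

Because the data travels as self-describing text, sender and receiver need share only the naming convention, not a platform, vendor, or programming language.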

Ethernet On The Shop Floor

The factors that made Ethernet an easy winner for office environments—low cost, high performance, transparent interface to the Internet and increasingly standardized software interfaces—are equally strong drivers in production environments. However, factory networks must meet different technical requirements.

Communications among programmable devices in a factory are peer-to-peer in nature whereas information technology networks are usually client/server based. Furthermore, factory networks linking programmable devices predominantly carry synchronization data and pass parameters or short messages. Such data packets are small and must meet often-stringent timing requirements because the systems they support run in real time. Failure to meet timing deadlines can have disastrous consequences that include risk to human health and safety. The technical term is that real-time systems must be deterministic.

Ethernet, because of its CSMA/CD interface, is not deterministic because there is no mathematically rigorous way to guarantee message delivery times. (See "Protecting America's Information Infrastructure" on page 39.) Production environments are also very "dirty" from electrical noise and physical viewpoints. These issues bring into question the reliability of office networks used in an environment for which they were not designed.

These very valid concerns delayed factory adoption of Ethernet until about the mid 1990s, but again, technology evolution prevailed. Fiber-optic cable technology is far less susceptible to electrical noise, is readily available, and is relatively easy to use. Driven by technology advances, Ethernet data rates have increased so that 100 megabits per second is now common, and gigabit Ethernet is emerging. Packet transmission time is now several orders of magnitude quicker than it was a decade ago.

Processor technology advancements have also dramatically boosted the processing bandwidth of network interface electronics and computer central processing units (CPUs), thereby reducing turnaround time, and with it, message latency. Combine all these advances, and the result is that many of the former barriers to factory use of Ethernet no longer exist, even though this technology is still not truly deterministic. (When turnaround and transmission times are brief enough, and total network traffic is low enough that the probability of meeting timing deadlines is high, then Ethernet use is likely to be acceptable, unless the deadlines in question involve safety.)
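The "probabilistic, not deterministic" distinction can be illustrated with a toy model. Assume, purely for illustration, that each transmission attempt independently finds the wire busy with probability `p_busy`, and that the deadline leaves room for a fixed number of retries; a faster, lightly loaded network lowers `p_busy` and raises the retry count, driving the miss probability down without ever reaching zero.

```python
def deadline_miss_probability(p_busy: float, attempts_within_deadline: int) -> float:
    """Toy model of Ethernet timing: the deadline is missed only if
    every attempt that fits within it finds the wire busy. Assumes
    independent attempts, which real traffic only approximates."""
    return p_busy ** attempts_within_deadline

# Heavily loaded, slow network: few retries fit in the deadline.
slow = deadline_miss_probability(p_busy=0.5, attempts_within_deadline=3)
# Lightly loaded, fast network: many retries fit in the deadline.
fast = deadline_miss_probability(p_busy=0.05, attempts_within_deadline=10)
```

Even in the favorable case the miss probability is tiny but strictly positive, which is precisely why Ethernet can be "good enough" for most factory traffic yet still unsuitable where safety deadlines demand a hard guarantee.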

The first instantiations of shopfloor, Ethernet-connected, human-interface devices were really Intel-based personal computers (PCs) packaged for industrial applications. These quickly led to the development and implementation of factory intranets. For factory intranets, shopfloor human-interface services are usually browser-based and use Microsoft Windows functionality. PC-based control systems and embedded controllers with PC front ends (for example, CNC and robot controllers) emerged in parallel with those early implementations. There, Ethernet connectivity had such obvious advantages that there was no room for a competing technology.

Innovative PC-based control and some programmable logic controller (PLC) suppliers have adapted Ethernet for use in connecting remote input/output (I/O) systems. Ethernet is now in use (in redundant configurations) for even "mission critical" applications such as distributed control systems (DCS) and supervisory control and data acquisition (SCADA) systems found in chemical plants and petroleum refineries.

The results are in, and the winner is clear. Except for the most critically demanding applications, the present and future factory network is Ethernet.

From Ethernet To Internet

The presence of Ethernet and TCP/IP in factory applications enables access to that Internet (or intranet) functionality on which we have become so dependent, both for our livelihoods and in our homes. By using TCP/IP and DNS services, two nodes anywhere in the world can establish a connection and exchange data. That is, they can unless they are prohibited from doing so by routing rules programmed into one or more of the many routing devices in the routing chain (IP works by using DNS entries to establish a chain of servers that forward data packets from sender to receiver).

That level of connectivity has provided immense benefits in the world of business systems, enabling location-independent access to information and integration of disparate systems, with and without human intervention. Similar benefits may be realized from factory implementations as well. However, certain issues must be addressed with caution.

In a production environment, standardized interfaces are different, although they provide about the same function. The programmable devices that provide factory automation and are linked by Ethernet invariably use proprietary data formats for their entire data structure.

Major suppliers of programmable devices are few, and they compete fiercely. All provide software specifications, and sometimes software packages, that encapsulate their proprietary formats as a layer above TCP/IP. Thus, a workstation connected to the factory network and using its operator interface or programming software can access all data on the entire control system.

In contrast, below the programmable device level, in the world of sensors and actuators, suppliers abound. Of necessity, the automation industry has defined standards for networks and device data structures. These, too, are usually encapsulated for use over the Ethernet.

The end result is that today, factory networks implemented using Ethernet and with a connection to the Internet offer transparent access to all the data in all the connected devices—unless, of course, the user controls that access in some way. Likewise, programming access to all programmable devices is also transparent unless the user restricts it in some way.
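One simple form such access control can take is an address allowlist, as a router or firewall would enforce. The sketch below is a hypothetical policy check using Python's standard `ipaddress` module; the subnet is an invented example, and real installations would implement this in network hardware rather than application code.

```python
import ipaddress

# Hypothetical policy: only hosts on the engineering subnet may
# reach programmable devices; all other sources are refused.
ALLOWED_SUBNET = ipaddress.ip_network("192.168.10.0/24")

def may_access_plc(source_ip: str) -> bool:
    """Return True only when the requesting node's address falls
    inside the allowed subnet—a minimal form of the access
    restriction discussed above."""
    return ipaddress.ip_address(source_ip) in ALLOWED_SUBNET

inside = may_access_plc("192.168.10.42")   # engineering workstation
outside = may_access_plc("10.0.0.7")       # node outside the subnet
```

The same membership test, applied packet by packet at a router, is what turns "transparent access to everything" into access only for those who need it.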

Transparent access to information from the shop floor to the boardroom, and from the customer to the end-of-product-life recycle center, enables many valuable functions. Among them are the following:

  • Design engineering access to customer requirements
  • Manufacturing engineering access to factory resource information
  • Customer access to build, assemble and ship information (à la Dell Computer)
  • Procurement visibility into bill of materials (BOM) requirements and component inventories
  • Supervisory access to factory health and daily schedule progress
  • Management access to work in progress (WIP), inventory and productivity information
  • Operator access to equipment training manuals and instructions
  • Maintenance access to equipment maintenance manuals and maintenance, repair, and operations (MRO) status
  • Equipment vendor access to equipment health information and the ability to perform remote diagnostics
  • Product design information available to end-of-product-life disposal decisions.

This list is by no means an exhaustive one. If one statement can be made with absolute certainty, it is that the clever imagination of application developers will outstrip even the most optimistic of forecasts.

The Down Side

To all the advantages and benefits that this new technology offers, however, there is a down side: The very openness on which the Internet relies also presents opportunities for misadventure, both deliberate and accidental.

Links to several examples of deliberate attacks are given on the Gas Technology Institute site. Among the examples are the following. In a well-documented attack in Australia, a disgruntled control system integrator penetrated the control system for a wastewater facility and released several million liters of raw sewage into waterways. A hacker (who, as it turned out, actually lived in Israel) penetrated the computer system for MIT's Plasma Science and Fusion Center, and was charged with attacking NASA, the Pentagon, and Harvard, Yale, Cornell and Stanford university systems as well.

Most incidents, likely more than 70 percent, are caused or aided by insiders (see "Can't happen at your site?" by Eric Byres, InTech). And most of those incidents are accidental—a manufacturing engineer intending to make a change in the program for PLC "xyz" instead logs into PLC "xzy" with disastrous consequences; an operator accidentally deletes a crucial file; and so forth. The primary point here is that factory Ethernet systems are subject to the same vulnerabilities, and perhaps a few more, as those in office environments. The consequences of an unauthorized penetration of a factory network can certainly be more serious in terms of human health and safety than penetration of an office network.
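One defense against exactly the wrong-PLC mistake described above is an identity check before any write: confirm the connected device reports the identity the engineer intended. The sketch below is hypothetical—the device class simulates a PLC rather than using any real vendor interface—but the guard pattern itself is standard practice.

```python
class SimulatedPLC:
    """Stand-in for a real controller; real PLCs expose an
    identity query through their vendor's protocol."""
    def __init__(self, device_id):
        self.device_id = device_id

    def report_identity(self):
        return self.device_id

def safe_download(plc, intended_id, program):
    """Refuse to write unless the connected device's reported
    identity matches what the engineer intended."""
    actual = plc.report_identity()
    if actual != intended_id:
        return f"refused: connected to {actual}, not {intended_id}"
    return f"downloaded {len(program)} bytes to {intended_id}"

result_ok = safe_download(SimulatedPLC("xyz"), "xyz", b"LD X0\nOUT Y0\n")
result_bad = safe_download(SimulatedPLC("xzy"), "xyz", b"LD X0\nOUT Y0\n")
```

A check this cheap converts the "xyz versus xzy" slip from a disastrous download into a refused operation and an error message.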

The Computer Emergency Response Team (CERT) of Carnegie Mellon University's Software Engineering Institute performs cyber forensics on reported incidents of intrusions and denial-of-service attacks. CERT statistics show that from 2000 to 2001, reported incidents more than doubled (from 21,756 to 52,658), as did system vulnerabilities (from 1,090 to 2,437).

Users must treat the information security aspects of new factory Ethernet implementations with care. For existing installations, performing a vulnerability assessment would be a sound first step. Independent, sometimes self-directed assessment methodologies are emerging from national laboratories and other government-funded entities. For example, CERT has developed a self-directed assessment methodology that begins with the identification of critical information assets and concludes with a risk mitigation strategy and an action list.

What Does The Future Hold?

Could anyone have predicted, a decade ago, the explosive growth of the Internet? Network and computer bandwidth are forecast to continue advancing at about the same pace as in the past decade. With the Internet firmly established and shop floor use expanding, a safe assumption is that the home and office successes will be applied in ways well suited to factory application. For example, the Intelligent Maintenance Systems consortium—a joint venture of the University of Wisconsin–Milwaukee and the University of Michigan that is funded by the National Science Foundation (NSF)—is developing practical, Internet-based systems that can predict equipment failure before it happens and use the Internet to order repair parts and schedule maintenance. In the future, we can expect much more. . . .

Nevertheless, security concerns will always be an issue because new systems are never perfect. The people who seek vulnerabilities will always find them. The people who then seek to patch them will do so also. Constant vigilance is the key to risk management.

About the author: Tony Haynes is director of manufacturing services at the National Center for Manufacturing Sciences.
