NETGRAPH(4) Device Drivers Manual NETGRAPH(4)

netgraph – graph based kernel networking subsystem

The netgraph system provides a uniform and modular system for the implementation of kernel objects which perform various networking functions. The objects, known as nodes, can be arranged into arbitrarily complicated graphs. Nodes have hooks which are used to connect two nodes together, forming the edges in the graph. Nodes communicate along the edges to process data, implement protocols, etc.

The aim of netgraph is to supplement rather than replace the existing kernel networking infrastructure. It provides:

o
A flexible way of combining protocol and link level drivers.
o
A modular way to implement new protocols.
o
A common framework for kernel entities to inter-communicate.
o
A reasonably fast, kernel-based implementation.

The most fundamental concept in netgraph is that of a node. All nodes implement a number of predefined methods which allow them to interact with other nodes in a well defined manner.

Each node has a type, which is a static property of the node determined at node creation time. A node's type is described by a unique ASCII type name. The type implies what the node does and how it may be connected to other nodes.

In object-oriented language, types are classes and nodes are instances of their respective class. All node types are subclasses of the generic node type, and hence inherit certain common functionality and capabilities (e.g., the ability to have an ASCII name).

Nodes may be assigned a globally unique ASCII name which can be used to refer to the node. The name must not contain the characters “.” or “:” and is limited to NG_NODESIZ characters (including NUL byte).

Each node instance has a unique ID number which is expressed as a 32-bit hex value. This value may be used to refer to a node when there is no ASCII name assigned to it.

Nodes are connected to other nodes by connecting a pair of hooks, one from each node. Data flows bidirectionally between nodes along connected pairs of hooks. A node may have as many hooks as it needs, and may assign whatever meaning it wants to a hook.

Hooks have these properties:

o
A hook has an ASCII name which is unique among all hooks on that node (other hooks on other nodes may have the same name).
o
A hook is always connected to another hook. That is, hooks are created at the time they are connected, and breaking an edge by removing either hook destroys both hooks.

A node may decide to assign special meaning to some hooks. For example, connecting to the hook named “debug” might trigger the node to start sending debugging information to that hook.

Two types of information flow between nodes: data messages and control messages. Data messages are passed in mbuf chains along the edges in the graph, one edge at a time. The first mbuf in a chain must have the M_PKTHDR flag set. Each node decides how to handle data coming in on its hooks.

Control messages are type-specific C structures sent from one node directly to some arbitrary other node. Control messages have a common header format, followed by type-specific data, and are binary structures for efficiency. However, node types also may support conversion of the type specific data between binary and ASCII for debugging and human interface purposes (see the NGM_ASCII2BINARY and NGM_BINARY2ASCII generic control messages below). Nodes are not required to support these conversions.

There are two ways to address a control message. If there is a sequence of edges connecting the two nodes, the message may be “source routed” by specifying the corresponding sequence of hooks as the destination address for the message (relative addressing). Otherwise, the recipient node's global ASCII name (or equivalent ID-based name) is used as the destination address for the message (absolute addressing). The two types of addressing may be combined, by specifying an absolute start node and a sequence of hooks.

Messages often represent commands that are followed by a reply message in the reverse direction. To facilitate this, the recipient of a control message is supplied with a “return address” that is suitable for addressing a reply.

Each control message contains a 32 bit value called a typecookie indicating the type of the message, i.e. how to interpret it. Typically each type defines a unique typecookie for the messages that it understands. However, a node may choose to recognize and implement more than one type of message.
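
As an illustration, the public header for a hypothetical node type named sample (the cookie value and command numbers below are invented for this sketch) might define its typecookie and commands as follows:

/* Hypothetical ng_sample.h: public definitions for the "sample" type */
#define NG_SAMPLE_NODE_TYPE	"sample"	/* unique ASCII type name */
#define NGM_SAMPLE_COOKIE	979463748	/* seconds-from-epoch when written */

/* Commands understood by this type, interpreted under the cookie above */
enum {
	NGM_SAMPLE_GET_COUNT = 1,	/* reply carries a u_int32_t frame count */
	NGM_SAMPLE_RESET,		/* no arguments, no reply */
};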

In order to minimize latency, most netgraph operations are functional. That is, data and control messages are delivered by making function calls rather than by using queues and mailboxes. For example, if node A wishes to send a data mbuf to neighboring node B, it calls the generic netgraph data delivery function. This function in turn locates node B and calls B's “receive data” method. While this mode of operation results in good performance, it has a few implications for node developers:

So far, these issues have not proven problematical in practice.

A node may have a hidden interaction with other components of the kernel outside of the netgraph subsystem, such as device hardware, kernel protocol stacks, etc. In fact, one of the benefits of netgraph is the ability to join disparate kernel networking entities together in a consistent communication framework.

An example is the socket node type, which is both a netgraph node and a socket(2) BSD socket in the protocol family PF_NETGRAPH. Socket nodes allow user processes to participate in netgraph. Other nodes communicate with socket nodes using the usual methods, and the node hides the fact that it is also passing information to and from a cooperating user process.

Another example is a device driver that presents a node interface to the hardware.

Nodes are notified of the following actions via function calls to the following node methods (all from inside critical sections) and may accept or reject that action (by returning the appropriate error code):

Creation of a new node
The constructor for the type is called. If creation of a new node is allowed, the constructor must call the generic node creation function (in object-oriented terms, the superclass constructor) and then allocate any special resources it needs. For nodes that correspond to hardware, this is typically done during the device attach routine. Often a global ASCII name corresponding to the device name is assigned here as well. (A constructor sketch follows this list.)
Creation of a new hook
The hook is created and tentatively linked to the node, and the node is told about the name that will be used to describe this hook. The node sets up any special data structures it needs, or may reject the connection, based on the name of the hook.
Successful connection of two hooks
After both ends have accepted their hooks, and the links have been made, the nodes get a chance to find out who their peer is across the link and can then decide to reject the connection. Tear-down is automatic.
Destruction of a hook
The node is notified of a broken connection. The node may consider some hooks to be critical to operation and others to be expendable: the disconnection of one hook may be an acceptable event while for another it may effect a total shutdown for the node.
Shutdown of a node
This method allows a node to clean up and to ensure that any actions that need to be performed at this time are taken. The method must call the generic (i.e. superclass) node destructor to get rid of the generic components of the node. Some nodes (usually associated with a piece of hardware) may be persistent, in that a shutdown breaks all edges and resets the node, but doesn't remove it, in which case the generic destructor is not called.
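
As a sketch of the constructor step described above, a constructor for the hypothetical sample type might look like the following. It assumes the traditional ng_make_node_common() generic constructor and DragonFly's kmalloc(9)/kfree(9); the allocator and exact entry points vary between netgraph versions, so treat this as an outline rather than a definitive implementation.

#include <sys/param.h>
#include <sys/malloc.h>
#include <netgraph/ng_message.h>
#include <netgraph/netgraph.h>

/* Hypothetical per-node private state for the "sample" type */
struct ng_sample_private {
	node_p		node;		/* back-pointer to the generic node */
	hook_p		outhook;	/* hook we forward data to, if any */
	u_int32_t	count;		/* example statistic */
};
typedef struct ng_sample_private *priv_p;

static struct ng_type ng_sample_typestruct;	/* defined further below */

static int
ng_sample_constructor(node_p *nodep)
{
	priv_p	priv;
	int	error;

	/* Allocate the type-specific state first */
	priv = kmalloc(sizeof(*priv), M_NETGRAPH, M_NOWAIT | M_ZERO);
	if (priv == NULL)
		return (ENOMEM);

	/* Call the generic (superclass) node constructor */
	error = ng_make_node_common(&ng_sample_typestruct, nodep);
	if (error != 0) {
		kfree(priv, M_NETGRAPH);
		return (error);
	}

	/* Cross-link the generic node and the private state */
	(*nodep)->private = priv;
	priv->node = *nodep;
	return (0);
}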

Three other methods are also supported by all nodes:

Receive data message
An mbuf chain is passed to the node. The node is notified on which hook the data arrived, and can use this information in its processing decision. The node must always free the mbuf chain on completion or error, or pass it on to another node (or kernel module) which will then be responsible for freeing it. (A sketch of a receive-data method follows this list.)

In addition to the mbuf chain itself there is also a pointer to a structure describing meta-data about the message (e.g. priority information). This pointer may be NULL if there is no additional information. The format for this information is described in <netgraph/netgraph.h>. The memory for meta-data must be allocated with malloc type M_NETGRAPH. As with the data itself, it is the receiver's responsibility to free the meta-data. If the mbuf chain is freed, the meta-data must be freed at the same time. If the meta-data is freed but the real data is passed on, then a NULL pointer must be substituted.

The receiving node may decide to defer the data by queueing it in the netgraph NETISR system (see below).

The structure and use of meta-data is still experimental, but is presently used in frame-relay to indicate that management packets should be queued for transmission at a higher priority than data packets. This is required for conformance with Frame Relay standards.

Receive queued data message
Usually this will be the same function as Receive data message. This is the entry point called when a data message is being handed to the node after having been queued in the NETISR system. This allows a node to decide in the Receive data message method that a message should be deferred and queued, and be sure that when it is processed from the queue, it will not be queued again.
Receive control message
This method is called when a control message is addressed to the node. A return address is always supplied, giving the address of the node that originated the message so a reply message can be sent anytime later.

It is possible for a synchronous reply to be made, and in fact this is more common in practice. This is done by setting a pointer (supplied as an extra function parameter) to point to the reply. Then when the control message delivery function returns, the caller can check whether this pointer has been made non-NULL; if so, it points to the reply message, allocated via malloc() and containing the synchronous response. In both directions (request and response), it is up to the receiver of that message to free() the control message buffer. All control messages and replies are allocated with malloc() type M_NETGRAPH.
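
Continuing the sample sketch, a minimal receive-data method could look like the following. It assumes the NG_SEND_DATA() and NG_FREE_DATA() convenience macros from <netgraph/netgraph.h>; whichever path is taken, responsibility for the mbuf chain and any meta-data passes out of this function.

static int
ng_sample_rcvdata(hook_p hook, struct mbuf *m, meta_p meta)
{
	const priv_p priv = hook->node->private;
	int error = 0;

	priv->count++;				/* example per-node statistic */

	if (priv->outhook != NULL) {
		/* Forward the frame; the macro consumes both m and meta */
		NG_SEND_DATA(error, priv->outhook, m, meta);
	} else {
		/* No destination connected: drop the frame and meta-data */
		NG_FREE_DATA(m, meta);
	}
	return (error);
}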

Much use has been made of reference counts, so that nodes from which all references have been removed are automatically freed, and this behaviour has been tested and debugged to present a consistent and trustworthy framework for the “type module” writer to use.

The netgraph framework provides an unambiguous and simple to use method of specifically addressing any single node in the graph. The naming of a node is independent of its type, in that another node, or external component need not know anything about the node's type in order to address it so as to send it a generic message type. Node and hook names should be chosen so as to make addresses meaningful.

Addresses are either absolute or relative. An absolute address begins with a node name (or ID), followed by a colon, followed by a sequence of hook names separated by periods. This addresses the node reached by starting at the named node and following the specified sequence of hooks. A relative address includes only the sequence of hook names, implicitly starting hook traversal at the local node.

There are a couple of special possibilities for the node name. The name “.” (referred to as “.:”) always refers to the local node. Also, nodes that have no global name may be addressed by their ID numbers, by enclosing the hex representation of the ID number within square brackets. Here are some examples of valid netgraph addresses:

  .:
  foo:
  .:hook1
  foo:hook1.hook2
  [f057cd80]:hook1

As an example, consider the following set of nodes, which might be created for a site with a single physical frame relay line having two active logical DLCI channels, with RFC 1490 frames on DLCI 16 and PPP frames over DLCI 20:

[type SYNC ]                  [type FRAME]                 [type RFC1490]
[ "Frame1" ](uplink)<-->(data)[<un-named>](dlci16)<-->(mux)[<un-named>  ]
[    A     ]                  [    B     ](dlci20)<---+    [     C      ]
                                                      |
                                                      |      [ type PPP ]
                                                      +>(mux)[<un-named>]
                                                             [    D     ]

One could always send a control message to node C from anywhere by using the name “Frame1:uplink.dlci16”. Similarly, “Frame1:uplink.dlci20” could reliably be used to reach node D, and node A could refer to node B as “.:uplink”, or simply “uplink”. Conversely, B can refer to A as “data”. The address “mux.data” could be used by both nodes C and D to address a message to node A.

Note that this is only for control messages. Data messages are routed one hop at a time, by specifying the departing hook, with each node making the next routing decision. So when B receives a frame on hook “data” it decodes the frame relay header to determine the DLCI, and then forwards the unwrapped frame to either C or D.

Interesting members of the node and hook structures are shown below:

struct  ng_node {
  char    *name;                /* Optional globally unique name */
  void    *private;             /* Node implementation private info */
  struct  ng_type *type;        /* The type of this node */
  int     refs;                 /* Number of references to this struct */
  int     numhooks;             /* Number of connected hooks */
  hook_p  hooks;                /* Linked list of (connected) hooks */
};
typedef struct ng_node *node_p;

struct  ng_hook {
  char           *name;         /* This node's name for this hook */
  void           *private;      /* Node implementation private info */
  int            refs;          /* Number of references to this struct */
  struct ng_node *node;         /* The node this hook is attached to */
  struct ng_hook *peer;         /* The other hook in this connected pair */
  struct ng_hook *next;         /* Next in list of hooks for this node */
};
typedef struct ng_hook *hook_p;

The maintenance of the name pointers, reference counts, and linked list of hooks for each node is handled automatically by the netgraph subsystem. Typically a node's private info contains a back-pointer to the node or hook structure, which counts as a new reference that must be registered by incrementing node->refs.

From a hook you can obtain the corresponding node, and from a node the list of all active hooks.
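
For instance, a node implementation can walk all of its connected hooks and reach the node on the far side of each edge using only the fields shown above (a sketch; what is done with each hook is left as a comment):

static void
ng_sample_visit_hooks(node_p node)
{
	hook_p hook;

	for (hook = node->hooks; hook != NULL; hook = hook->next) {
		node_p peernode = hook->peer->node;

		/* ... examine hook->name, hook->private, peernode ... */
		(void)peernode;
	}
}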

Node types are described by these structures:

/** How to convert a control message from binary <-> ASCII */
struct ng_cmdlist {
  u_int32_t                  cookie;     /* typecookie */
  int                        cmd;        /* command number */
  const char                 *name;      /* command name */
  const struct ng_parse_type *mesgType;  /* args if !NGF_RESP */
  const struct ng_parse_type *respType;  /* args if NGF_RESP */
};

struct ng_type {
  u_int32_t version;                    /* Must equal NG_VERSION */
  const  char *name;                    /* Unique type name */

  /* Module event handler */
  modeventhand_t  mod_event;            /* Handle load/unload (optional) */

  /* Constructor */
  int    (*constructor)(node_p *node);  /* Create a new node */

  /** Methods using the node **/
  int    (*rcvmsg)(node_p node,         /* Receive control message */
            struct ng_mesg *msg,                /* The message */
            const char *retaddr,                /* Return address */
            struct ng_mesg **resp);             /* Synchronous response */
  int    (*shutdown)(node_p node);      /* Shutdown this node */
  int    (*newhook)(node_p node,        /* create a new hook */
            hook_p hook,                        /* Pre-allocated struct */
            const char *name);                  /* Name for new hook */

  /** Methods using the hook **/
  int    (*connect)(hook_p hook);       /* Confirm new hook attachment */
  int    (*rcvdata)(hook_p hook,        /* Receive data on a hook */
            struct mbuf *m,                     /* The data in an mbuf */
            meta_p meta);                       /* Meta-data, if any */
  int    (*disconnect)(hook_p hook);    /* Notify disconnection of hook */

  /** How to convert control messages binary <-> ASCII */
  const struct ng_cmdlist *cmdlist; 	/* Optional; may be NULL */
};
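
Tying the earlier sketches together, a node type declares a single static struct ng_type describing itself. The following is a hedged outline for the hypothetical sample type; the remaining methods (rcvmsg, shutdown, newhook, connect, disconnect) are assumed to be defined elsewhere in the same file, and designated initializers are used so that unmentioned optional fields default to NULL.

static struct ng_type ng_sample_typestruct = {
	.version	= NG_VERSION,
	.name		= NG_SAMPLE_NODE_TYPE,	/* "sample" */
	.mod_event	= NULL,			/* optional */
	.constructor	= ng_sample_constructor,
	.rcvmsg		= ng_sample_rcvmsg,
	.shutdown	= ng_sample_shutdown,
	.newhook	= ng_sample_newhook,
	.connect	= ng_sample_connect,
	.rcvdata	= ng_sample_rcvdata,
	.disconnect	= ng_sample_disconnect,
	.cmdlist	= NULL,			/* no ASCII conversion table */
};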

Control messages have the following structure:

#define NG_CMDSTRSIZ    16      /* Max command string (including null) */

struct ng_mesg {
  struct ng_msghdr {
    u_char      version;        /* Must equal NG_VERSION */
    u_char      spare;          /* Pad to 2 bytes */
    u_short     arglen;         /* Length of cmd/resp data */
    u_long      flags;          /* Message status flags */
    u_long      token;          /* Reply should have the same token */
    u_long      typecookie;     /* Node type understanding this message */
    u_long      cmd;            /* Command identifier */
    u_char      cmdstr[NG_CMDSTRSIZ]; /* Cmd string (for debug) */
  } header;
  char  data[0];                /* Start of cmd/resp data */
};

#define NG_VERSION      1               /* Netgraph version */
#define NGF_ORIG        0x0000          /* Command */
#define NGF_RESP        0x0001          /* Response */

Control messages have the fixed header shown above, followed by a variable length data section which depends on the type cookie and the command. Each field is explained below:

version
Indicates the version of netgraph itself. The current version is NG_VERSION.
arglen
This is the length of any extra arguments, which begin at data.
flags
Indicates whether this is a command or a response control message.
token
The token is a means by which a sender can match a reply message to the corresponding command message; the reply always has the same token.
typecookie
The corresponding node type's unique 32-bit value. If a node doesn't recognize the type cookie it must reject the message by returning EINVAL.

Each type should have an include file that defines the commands, argument format, and cookie for its own messages. The typecookie ensures that the same header file was included by both sender and receiver; when an incompatible change is made to the header file, the typecookie must be changed. The de facto method for generating unique type cookies is to take the seconds from the epoch at the time the header file is written (i.e. the output of date -u +'%s').

There is a predefined typecookie NGM_GENERIC_COOKIE for the “generic” node type, and a corresponding set of generic messages which all nodes understand. The handling of these messages is automatic.

cmd
The identifier for the message command. This is type specific, and is defined in the same header file as the typecookie.
cmdstr
Room for a short human readable version of “command” (for debugging purposes only).

Some modules may choose to implement messages from more than one of the header files and thus recognize more than one type cookie.
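
As a sketch of how a node might originate a control message from inside the kernel, the following sends the hypothetical NGM_SAMPLE_RESET command to a node two hops away using relative addressing. It assumes the traditional NG_MKMESSAGE() macro and ng_send_msg() delivery function; the names, signatures, and hook names used here are illustrative only.

static int
ng_sample_reset_neighbour(node_p node)
{
	struct ng_mesg *msg;

	/* Allocate and fill in the fixed header; no argument data (len 0) */
	NG_MKMESSAGE(msg, NGM_SAMPLE_COOKIE, NGM_SAMPLE_RESET, 0, M_NOWAIT);
	if (msg == NULL)
		return (ENOMEM);

	/*
	 * "downstream.mux" is a relative address: leave this node via the
	 * hook named "downstream", then leave that neighbour via its hook
	 * "mux".  The message is freed by the delivery code, not by us.
	 */
	return (ng_send_msg(node, msg, "downstream.mux", NULL));
}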

Control messages are in binary format for efficiency. However, for debugging and human interface purposes, and if the node type supports it, control messages may be converted to and from an equivalent ASCII form. The ASCII form is similar to the binary form, with two exceptions:

o
The cmdstr header field must contain the ASCII name of the command, corresponding to the cmd header field.
o
The args field contains a NUL-terminated ASCII string version of the message arguments.

In general, the arguments field of a control message can be any arbitrary C data type. Netgraph includes parsing routines to support some pre-defined datatypes in ASCII with this simple syntax (an example follows the list below):

o
Integer types are represented by base 8, 10, or 16 numbers.
o
Strings are enclosed in double quotes and respect the normal C language backslash escapes.
o
IP addresses have the obvious form.
o
Arrays are enclosed in square brackets, with the elements listed consecutively starting at index zero. An element may have an optional index and equals sign preceding it. Whenever an element does not have an explicit index, the index is implicitly the previous element's index plus one.
o
Structures are enclosed in curly braces, and each field is specified in the form “fieldname=value”.
o
Any array element or structure field whose value is equal to its “default value” may be omitted. For integer types, the default value is usually zero; for string types, the empty string.
o
Array elements and structure fields may be specified in any order.
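
For example, the ASCII form of the arguments to a hypothetical command taking a structure (the field names here are invented for illustration) might look like:

  { dlci=16 name="uplink" levels=[ 2 3 6=10 11 ] }

Here levels[0]=2, levels[1]=3, levels[6]=10 and levels[7]=11, while elements 2 through 5 keep their default value and are therefore omitted.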

Each node type may define its own arbitrary types by providing the necessary routines to parse and unparse. ASCII forms defined for a specific node type are documented in the documentation for that node type.

There are a number of standard predefined messages that will work for any node, as they are supported directly by the framework itself. These are defined in <netgraph/ng_message.h> along with the basic layout of messages and other similar information.

NGM_CONNECT
Connect to another node, using the supplied hook names on either end.
NGM_MKPEER
Construct a node of the given type and then connect to it using the supplied hook names.
NGM_SHUTDOWN
The target node should disconnect from all its neighbours and shut down. Persistent nodes such as those representing physical hardware might not disappear from the node namespace, but only reset themselves. The node must disconnect all of its hooks. This may result in neighbors shutting themselves down, and possibly a cascading shutdown of the entire connected graph.
NGM_NAME
Assign a name to a node. Nodes can exist without having a name, and this is the default for nodes created using the NGM_MKPEER message. Such nodes can only be addressed relatively or by their ID number.
NGM_RMHOOK
Ask the node to break a hook connection to one of its neighbours. Both nodes will have their “disconnect” method invoked. Either node may elect to totally shut down as a result.
NGM_NODEINFO
Asks the target node to describe itself. The four returned fields are the node name (if named), the node type, the node ID and the number of hooks attached. The ID is an internal number unique to that node.
NGM_LISTHOOKS
This returns the information given by NGM_NODEINFO, but in addition includes an array of fields describing each link, and the description for the node at the far end of that link.
NGM_LISTNAMES
This returns an array of node descriptions (as for NGM_NODEINFO) where each entry of the array describes a named node. All named nodes will be described.
NGM_LISTNODES
This is the same as NGM_LISTNAMES except that all nodes are listed regardless of whether they have a name or not.
NGM_LISTTYPES
This returns a list of all currently installed netgraph types.
NGM_TEXT_STATUS
The node may return a text formatted status message. The status information is determined entirely by the node type. It is the only “generic” message that requires any support within the node itself, and as such the node may elect not to support this message. The text response must be less than NG_TEXTRESPONSE bytes in length (presently 1024). This can be used to return general status information in human readable form.
NGM_BINARY2ASCII
This message converts a binary control message to its ASCII form. The entire control message to be converted is contained within the arguments field of the NGM_BINARY2ASCII message itself. If successful, the reply will contain the same control message in ASCII form. A node will typically only know how to translate messages that it itself understands, so the target node of the NGM_BINARY2ASCII is often the same node that would actually receive that message.
NGM_ASCII2BINARY
The opposite of NGM_BINARY2ASCII. The entire control message to be converted, in ASCII form, is contained in the arguments section of the NGM_ASCII2BINARY and need only have the flags, cmdstr, and arglen header fields filled in, plus the NUL-terminated string version of the arguments in the arguments field. If successful, the reply contains the binary version of the control message.

Data moving through the netgraph system can be accompanied by meta-data that describes some aspect of that data. The form of the meta-data is a fixed header, which contains enough information for most uses, and can optionally be supplemented by trailing structures, which contain a cookie (see the section on control messages above), an identifier, a length and optional data. If a node does not recognize the cookie associated with an option, it should ignore that option.

Meta data might include such things as priority, discard eligibility, or special processing requirements. It might also mark a packet for debug status, etc. The use of meta-data is still experimental.

The base netgraph code may either be statically compiled into the kernel or else loaded dynamically as a KLD via kldload(8). In the former case, include

options NETGRAPH

in your kernel configuration file. You may also include selected node types in the kernel compilation, for example:

options NETGRAPH
options NETGRAPH_SOCKET
options NETGRAPH_ECHO

Once the netgraph subsystem is loaded, individual node types may be loaded at any time as KLD modules via kldload(8). Moreover, netgraph knows how to automatically do this; when a request to create a new node of unknown type type is made, netgraph will attempt to load the KLD module ng_type.ko.

Types can also be installed at boot time, as certain device drivers may want to export each instance of the device as a netgraph node.

In general, new types can be installed at any time from within the kernel by calling ng_newtype(), supplying a pointer to the type's struct ng_type structure.

The NETGRAPH_INIT() macro automates this process by using a linker set.
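
For a type built as its own module, this typically reduces to a single line; continuing the hypothetical sample type (the exact macro arguments may differ between netgraph versions):

NETGRAPH_INIT(sample, &ng_sample_typestruct);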

Several node types currently exist. Each is fully documented in its own man page:

SOCKET
The socket type implements two new sockets in the new protocol domain PF_NETGRAPH. The new socket protocols are NG_DATA and NG_CONTROL, both of type SOCK_DGRAM. Typically one of each is associated with a socket node. When both sockets have been closed, the node will shut down. The NG_DATA socket is used for sending and receiving data, while the NG_CONTROL socket is used for sending and receiving control messages. Data and control messages are passed using the sendto(2) and recvfrom(2) calls, using a struct sockaddr_ng socket address.
HOLE
Responds only to generic messages and is a “black hole” for data. Useful for testing. Always accepts new hooks.
ECHO
Responds only to generic messages and always echoes data back through the hook from which it arrived. Returns any non generic messages as their own response. Useful for testing. Always accepts new hooks.
TEE
This node is useful for “snooping”. It has 4 hooks: left, right, left2right, and right2left. Data entering from the right is passed to the left and duplicated on right2left, and data entering from the left is passed to the right and duplicated on left2right. Data entering from left2right is sent to the right and data from right2left to left.
RFC1490 MUX
Encapsulates/de-encapsulates frames encoded according to RFC 1490. Has a hook for the encapsulated packets (“downstream”) and one hook for each protocol (i.e. IP, PPP, etc.).
FRAME RELAY MUX
Encapsulates/de-encapsulates Frame Relay frames. Has a hook for the encapsulated packets (“downstream”) and one hook for each DLCI.
FRAME RELAY LMI
Automatically handles frame relay “LMI” (link management interface) operations and packets. Automatically probes and detects which of several LMI standards is in use at the exchange.
TTY
This node is also a line discipline. It simply converts between mbuf frames and sequential serial data, allowing a tty to appear as a netgraph node. It has a programmable “hotkey” character.
ASYNC
This node encapsulates and de-encapsulates asynchronous frames according to RFC 1662. This is used in conjunction with the TTY node type for supporting PPP links over asynchronous serial lines.
INTERFACE
This node is also a system networking interface. It has hooks representing each protocol family (IP, AppleTalk, etc.) and appears in the output of ifconfig(8). The interfaces are named ng0, ng1, etc.

Whether a named node exists can be checked by trying to send a control message to it (e.g., NGM_NODEINFO). If it does not exist, ENOENT will be returned.

All data messages are mbuf chains with the M_PKTHDR flag set.

Nodes are responsible for freeing what they allocate. There are three exceptions:

1
Mbufs sent across a data link are never to be freed by the sender.
2
Any meta-data information traveling with the data has the same restriction. It might be freed by any node the data passes through, and a NULL passed onwards, but the caller will never free it. Two macros, NG_FREE_META(meta) and NG_FREE_DATA(m, meta), should be used if possible to free data and meta-data (see <netgraph/netgraph.h>).
3
Messages sent using ng_send_msg() are freed by the callee. As in the case above, the addresses associated with the message are freed by whatever allocated them, so the recipient should copy them if it wants to keep that information.

<netgraph/netgraph.h>
Definitions for use solely within the kernel by netgraph nodes.
<netgraph/ng_message.h>
Definitions needed by any file that needs to deal with netgraph messages.
<netgraph/socket/ng_socket.h>
Definitions needed to use netgraph socket type nodes.
<netgraph/{type}/ng_{type}.h>
Definitions needed to use netgraph {type} nodes, including the type cookie definition.
/boot/kernel/netgraph.ko
Netgraph subsystem loadable KLD module.
/boot/kernel/ng_{type}.ko
Loadable KLD module for node type {type}.

There is a library for supporting user-mode programs that wish to interact with the netgraph system. See netgraph(3) for details.
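
As a sketch of user-mode access using that library (assuming the NgMkSockNode() and NgSendMsg() routines described in netgraph(3), with error handling abbreviated), a program could create a socket node and query the node named “Frame1” from the earlier example:

#include <sys/types.h>
#include <stdio.h>
#include <netgraph.h>
#include <netgraph/ng_message.h>

int
main(void)
{
	int cs, ds;

	/* Create an unnamed socket node; cs/ds are its control/data sockets */
	if (NgMkSockNode(NULL, &cs, &ds) < 0) {
		perror("NgMkSockNode");
		return (1);
	}

	/* Ask the node named "Frame1" to describe itself */
	if (NgSendMsg(cs, "Frame1:", NGM_GENERIC_COOKIE, NGM_NODEINFO,
	    NULL, 0) < 0)
		perror("NgSendMsg");

	/* The reply would be read back with NgRecvMsg(); see netgraph(3). */
	return (0);
}

Such a program is linked with -lnetgraph.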

Two user-mode support programs, ngctl(8) and nghook(8), are available to assist manual configuration and debugging.

There are a few useful techniques for debugging new node types. Implementing a new node type in user-mode first makes debugging easier. The tee node type is also useful for debugging, especially in conjunction with ngctl(8) and nghook(8).

socket(2), netgraph(3), ng_async(4), ng_bpf(4), ng_bridge(4), ng_cisco(4), ng_echo(4), ng_eiface(4), ng_etf(4), ng_ether(4), ng_frame_relay(4), ng_hole(4), ng_iface(4), ng_ksocket(4), ng_l2tp(4), ng_lmi(4), ng_mppc(4), ng_one2many(4), ng_ppp(4), ng_pppoe(4), ng_rfc1490(4), ng_socket(4), ng_tee(4), ng_tty(4), ng_UI(4), ng_vjc(4), ngctl(8), nghook(8)

The netgraph system was designed and first implemented at Whistle Communications, Inc. in a version of FreeBSD 2.2 customized for the Whistle InterJet. It first made its debut in the main tree in FreeBSD 3.4.

Julian Elischer <julian@FreeBSD.org>, with contributions by Archie Cobbs <archie@FreeBSD.org>.

December 12, 2014 DragonFly-5.6.1