US20150029914A1 - Energy efficient forwarding in ad-hoc wireless networks - Google Patents

Energy efficient forwarding in ad-hoc wireless networks

Info

Publication number
US20150029914A1
US20150029914A1 (application US12/537,085)
Authority
US
United States
Prior art keywords
node
wireless
message
network
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/537,085
Inventor
Brig Barnum Elliott
David Spencer Pearson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OSO IP LLC
POWER MESH NETWORKS LLC
Raytheon BBN Technologies Corp
III Holdings 1 LLC
Original Assignee
POWER MESH NETWORKS LLC
Raytheon BBN Technologies Corp
III Holdings 1 LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/998,946 (now US7020501B1)
Priority to US12/537,085 (published as US20150029914A1)
Application filed by POWER MESH NETWORKS LLC, Raytheon BBN Technologies Corp, and III Holdings 1 LLC
Assigned to TRI-COUNTY EXCELSIOR FOUNDATION: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AZURE NETWORKS, LLC
Assigned to AZURE NETWORKS, LLC: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TRI-COUNTY EXCELSIOR FOUNDATION
Assigned to BBN TECHNOLOGIES CORP.: MERGER (SEE DOCUMENT FOR DETAILS). Assignors: BBNT SOLUTIONS LLC
Assigned to POWER MESH NETWORKS, LLC: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: STRAGENT, LLC
Assigned to AZURE NETWORKS, LLC: CONFIRMATORY ASSIGNMENT. Assignors: OSO IP, LLC
Assigned to BALTHER TECHNOLOGIES, LLC: CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: POWER MESH NETWORKS, LLC
Assigned to OSO IP, LLC: CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: BALTHER TECHNOLOGIES, LLC
Assigned to OSO IP, LLC: CONFIRMATORY ASSIGNMENT. Assignors: STRAGENT, LLC
Assigned to AZURE NETWORKS, LLC: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: POWER MESH NETWORKS, LLC
Assigned to BBNT SOLUTIONS LLC: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ELLIOTT, BRIG BARNUM
Assigned to STRAGENT, LLC: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BBN TECHNOLOGIES CORP.
Assigned to STRAGENT, LLC: NUNC PRO TUNC ASSIGNMENT WITH AN EFFECTIVE DATE OF OCTOBER 17, 2008. Assignors: RAYTHEON BBN TECHNOLOGIES CORP.
Assigned to POWER MESH NETWORKS, LLC: NUNC PRO TUNC ASSIGNMENT WITH AN EFFECTIVE DATE OF OCTOBER 18, 2008. Assignors: STRAGENT, LLC
Assigned to RAYTHEON BBN TECHNOLOGIES CORP.: NUNC PRO TUNC ASSIGNMENT WITH AN EFFECTIVE DATE OF OCTOBER 16, 2008. Assignors: PEARSON, DAVID SPENCER; ELLIOTT, BRIG BARNUM
Assigned to III HOLDINGS 1, LLC: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AZURE NETWORKS, LLC
Priority to US14/538,563 (now US9674858B2)
Publication of US20150029914A1
Priority to US15/409,055 (now US10588139B2)
Priority to US16/807,311 (now US10863528B2)
Priority to US17/114,624 (now US11445523B2)
Legal status: Abandoned

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 56/00 Synchronisation arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 72/00 Local resource management
    • H04W 72/12 Wireless traffic scheduling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04J MULTIPLEX COMMUNICATION
    • H04J 3/00 Time-division multiplex systems
    • H04J 3/16 Time-division multiplex systems in which the time allocation to individual channels within a transmission cycle is variable, e.g. to accommodate varying complexity of signals, to vary number of channels transmitted
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 72/00 Local resource management
    • H04W 72/20 Control channels or signalling for resource management
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 52/00 Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W 52/02 Power saving arrangements
    • H04W 52/0209 Power saving arrangements in terminal devices
    • H04W 52/0212 Power saving arrangements in terminal devices managed by the network, e.g. network or access point is master and terminal is slave
    • H04W 52/0216 Power saving arrangements in terminal devices managed by the network, e.g. network or access point is master and terminal is slave, using a pre-established activity schedule, e.g. traffic indication frame
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 52/00 Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W 52/02 Power saving arrangements
    • H04W 52/0209 Power saving arrangements in terminal devices
    • H04W 52/0212 Power saving arrangements in terminal devices managed by the network, e.g. network or access point is master and terminal is slave
    • H04W 52/0219 Power saving arrangements in terminal devices managed by the network, e.g. network or access point is master and terminal is slave, where the power saving management affects multiple terminals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 84/00 Network topologies
    • H04W 84/18 Self-organising networks, e.g. ad-hoc networks or sensor networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • the present invention relates generally to ad-hoc, multi-node wireless networks and, more particularly, to systems and methods for implementing energy efficient data forwarding mechanisms in such networks.
  • Sensor nodes in such networks conduct measurements at distributed locations and relay the measurements, via other sensor nodes in the network, to one or more measurement data collection points.
  • Sensor networks are generally envisioned as encompassing a large number (N) of sensor nodes (e.g., as many as tens of thousands of sensor nodes), with traffic flowing from the sensor nodes into a much smaller number (K) of measurement data collection points using routing protocols.
  • routing protocols conventionally involve the forwarding of routing packets throughout the sensor nodes of the network to distribute the routing information necessary for sensor nodes to relay measurements to an appropriate measurement data collection point.
  • each sensor node of the network operates for extended periods of time on self-contained power supplies (e.g., batteries or fuel cells).
  • each sensor node must be prepared to receive and forward routing packets at any time.
  • Each sensor node's transmitter and receiver, thus, conventionally operate in a continuous fashion to enable the sensor node to receive and forward the routing packets essential for relaying measurements from a measuring sensor node to a measurement data collection point in the network. This continuous operation depletes each node's power supply reserves and, therefore, limits the operational life of each of the sensor nodes.
  • Systems and methods consistent with the present invention address this need and others by providing mechanisms that enable sensor node transmitters and receivers to be turned off, and remain in a “sleep” state, for substantial periods, thus, increasing the energy efficiency of the nodes.
  • Systems and methods consistent with the present invention further implement transmission and reception schedules that permit the reception and forwarding of packets containing routing, or other types of data, during short periods when the sensor node transmitters and receivers are powered up and, thus, “awake.”
  • the present invention thus increases sensor node operational life by reducing energy consumption while permitting the reception and forwarding of the routing messages needed to self-organize the distributed network.
  • a method of conserving energy in a node in a wireless network includes receiving a first powering-on schedule from another node in the network, and selectively powering-on at least one of a transmitter and receiver based on the received first schedule.
  • a method of conveying messages in a sensor network includes organizing a sensor network into a hierarchy of tiers, transmitting one or more transmit/receive scheduling messages throughout the network, and transmitting and receiving data messages between nodes in adjacent tiers based on the one or more transmit/receive scheduling messages.
  • a method of conserving energy in a multi-node network includes organizing the multi-node network into tiers, producing a transmit/receive schedule at a first tier in the network, and controlling the powering-on and powering-off of transmitters and receivers in nodes in a tier adjacent to the first tier according to the transmit/receive schedule.
  • a method of forwarding messages at a first node in a network includes receiving scheduling messages from a plurality of nodes in the network, selecting one of the plurality of nodes as a parent node, and selectively forwarding data messages to the parent node based on the received scheduling message associated with the selected one of the plurality of nodes.
  • FIG. 1 illustrates an exemplary network consistent with the present invention
  • FIG. 2 illustrates an exemplary sensor network consistent with the present invention
  • FIG. 3 illustrates the exemplary sensor network of FIG. 2 organized into tiers consistent with the present invention
  • FIG. 4 illustrates exemplary components of a sensor node consistent with the present invention
  • FIG. 5 illustrates exemplary components of a monitor point consistent with the present invention
  • FIG. 6A illustrates an exemplary monitor point database consistent with the present invention
  • FIG. 6B illustrates exemplary monitor point affiliation/schedule data stored in the database of FIG. 6A consistent with the present invention
  • FIG. 7A illustrates an exemplary sensor node database consistent with the present invention
  • FIG. 7B illustrates exemplary sensor node affiliation/schedule data stored in the database of FIG. 7A consistent with the present invention
  • FIG. 8 illustrates an exemplary schedule message consistent with the present invention
  • FIG. 9 illustrates exemplary transmit/receive scheduling consistent with the present invention
  • FIGS. 10-11 are flowcharts that illustrate parent/child affiliation processing consistent with the present invention.
  • FIG. 12 is a flowchart that illustrates exemplary monitor point scheduling processing consistent with the present invention.
  • FIGS. 13-16 are flowcharts that illustrate sensor node schedule message processing consistent with the present invention.
  • FIG. 17 illustrates an exemplary message transmission diagram consistent with the present invention.
  • FIG. 18 illustrates exemplary node receiver timing consistent with the present invention.
  • FIG. 19 illustrates exemplary components of a sensor node consistent with the present invention
  • FIG. 20 illustrates exemplary components of a monitor point consistent with the present invention
  • FIG. 21A illustrates a first exemplary database consistent with the present invention
  • FIG. 21B illustrates an exemplary monitor point table stored in the database of FIG. 21A consistent with the present invention
  • FIG. 22A illustrates a second exemplary database consistent with the present invention
  • FIG. 22B illustrates an exemplary sensor forwarding table stored in the database of FIG. 22A consistent with the present invention
  • FIG. 23 illustrates an exemplary monitor point beacon message consistent with the present invention
  • FIGS. 24-25 are flowcharts that illustrate exemplary monitor point beacon message processing consistent with the present invention.
  • FIG. 26 illustrates an exemplary sensor node beacon message consistent with the present invention
  • FIG. 27 is a flowchart that illustrates exemplary sensor node beacon message processing consistent with the present invention.
  • FIGS. 28-31 are flowcharts that illustrate exemplary sensor node forwarding table update processing consistent with the present invention.
  • FIG. 32 illustrates an exemplary sensor datagram consistent with the present invention
  • FIGS. 33-34 are flowcharts that illustrate exemplary sensor node datagram processing consistent with the present invention.
  • FIGS. 35-39 are flowcharts that illustrate exemplary sensor node datagram relay processing consistent with the present invention.
  • FIGS. 40-43 are flowcharts that illustrate exemplary monitor point datagram processing consistent with the present invention.
  • Systems and methods consistent with the present invention provide mechanisms for conserving energy in wireless nodes by transmitting scheduling messages throughout the nodes of the network.
  • the scheduling messages include time schedules for selectively powering-on and powering-off node transmitters and receivers.
  • Message datagrams and routing messages may, thus, be conveyed throughout the network during appropriate transmitter/receiver power-on and power-off intervals.
  • FIG. 1 illustrates an exemplary network 100 , consistent with the present invention.
  • Network 100 may include monitor points 105 a - 105 n connected to sensor network 110 and network 115 via wired 120 , wireless 125 , or optical connection links (not shown).
  • Network 100 may further include one or more servers 130 interconnected with network 115 .
  • Monitor points 105 a - 105 n may include data transceiver units for transmitting messages to, and receiving messages from, one or more sensors of sensor network 110 .
  • Such messages may include routing messages containing network routing data, message datagrams containing sensor measurement data, and schedule messages containing sensor node transmit and receive scheduling data.
  • the routing messages may include identification data for one or more monitor points, and the number of hops to reach each respective identified monitor point, as determined by a sensor node/monitor point that is the source of the routing message.
  • the routing messages may be transmitted as wireless broadcast messages in network 100 . The routing messages, thus, permit sensor nodes to determine a minimum hop path to a monitor point in network 100 .
  • monitor points 105 a - 105 n may operate as “sinks” for sensor measurements made at nearby sensor nodes.
  • Message datagrams may include sensor measurement data that may be transmitted to a monitor point 105 a - 105 n for data collection.
  • Message datagrams may be sent from a monitor point to a sensor node, from a sensor node to a monitor point, or from a sensor node to a sensor node.
  • monitor points 105 a - 105 n may include data transceiver units for transmitting messages to, and receiving messages from, one or more sensors of sensor network 110 .
  • Such messages may include beacon messages and message datagrams.
  • Beacon messages may include identification data for one or more monitor points, and the number of hops to reach each respective identified monitor point, as determined by a sensor node/monitor point that is the source of the beacon message.
  • Beacon messages may be transmitted as wireless broadcast messages in network 100 . Beacon messages, thus, permit sensor nodes to determine a minimum hop path to a monitor point in network 100 .
  • monitor points 105 a - 105 n may operate as “sinks” for sensor measurements made at nearby sensor nodes.
  • Message datagrams may be sent from a monitor point to a sensor node, from a sensor node to a monitor point, or from a sensor node to a sensor node.
  • Message datagrams may include path information for transmitting message datagrams, hop by hop, from one node in network 100 to another node in network 100 .
  • Message datagrams may further include sensor measurement data that may be transmitted to a monitor point 105 a - 105 n for data collection.
  • Sensor network 110 may include one or more distributed sensor nodes (not shown) that may organize themselves into an ad-hoc, multi-hop wireless network.
  • Each of the distributed sensor nodes of sensor network 110 may include one or more of any type of conventional sensing device, such as, for example, acoustic sensors, motion-detection sensors, radar sensors, sensors that detect specific chemicals or families of chemicals, sensors that detect nuclear radiation or biological agents, magnetic sensors, electronic emissions signal sensors, thermal sensors, and visual sensors that detect or record still or moving images in the visible or other spectrum.
  • Sensor nodes of sensor network 110 may perform one or more measurements over a sampling period and transmit the measured values via packets, datagrams, cells or the like to monitor points 105 a - 105 n.
  • Network 115 may include one or more networks of any type, including a Public Land Mobile Network (PLMN), Public Switched Telephone Network (PSTN), local area network (LAN), metropolitan area network (MAN), wide area network (WAN), Internet, or Intranet.
  • the one or more PLMNs may further include packet-switched sub-networks, such as, for example, General Packet Radio Service (GPRS), Cellular Digital Packet Data (CDPD), and Mobile IP sub-networks.
  • Server 130 may include a conventional computer, such as a desktop, laptop or the like. Server 130 may collect data, via network 115 , from each monitor point 105 of network 100 and archive the data for future retrieval.
  • FIG. 2 illustrates an exemplary sensor network 110 consistent with the present invention.
  • Sensor network 110 may include one or more sensor nodes 205 a - 205 s that may be distributed across a geographic area.
  • Sensor nodes 205 a - 205 s may communicate with one another, and with one or more monitor points 105 a - 105 n , via wireless or wire-line links (not shown), using, for example, packet-switching mechanisms.
  • sensor nodes 205 a - 205 s may organize themselves into an ad-hoc, multi-hop wireless network through the communication of routing messages and message datagrams.
  • sensor nodes 205 a - 205 s may organize themselves into an ad-hoc, multi-hop wireless network through the communication of beacon messages and message datagrams.
  • Beacon messages may be transmitted as wireless broadcast messages and may include identification data for one or more monitor points, and the number of hops to reach each respective identified monitor point, as determined by a sensor node/monitor point that is the source of the beacon message.
  • Message datagrams may include path information for transmitting the message datagrams, hop by hop, from one node in network 100 to another node in network 100 .
  • Message datagrams may further include sensor measurement data that may be transmitted to a monitor point 105 a - 105 n for data collection.
  • FIG. 3 illustrates sensor network 110 self-organized into tiers using conventional routing protocols, or the routing protocol described in the above-described patent application Ser. No. 09/999,353.
  • messages may be forwarded, hop by hop through the network, from monitor points to sensor nodes, or from individual sensor nodes to monitor points that act as “sinks” for nearby sensor nodes.
  • monitor point MP1 105 a may act as a “sink” for message datagrams from sensor nodes 205 a - 205 e
  • monitor point MP2 105 b may act as a “sink” for message datagrams from sensor nodes 205 f - 205 l
  • monitor point MP3 105 n may act as a “sink” for message datagrams from sensor nodes 205 m - 205 s.
  • monitor point MP1 105 a may reside in MP1 tier 0 305
  • sensor nodes 205 a - 205 c may reside in MP1 tier 1 310
  • sensor nodes 205 d - 205 e may reside in MP1 tier 2 315
  • Monitor point MP2 105 b may reside in MP2 tier 0 320
  • sensor nodes 205 f - 205 h may reside in MP2 tier 1 325
  • sensor nodes 205 i - 205 k may reside in MP2 tier 2 330
  • sensor node 205 l may reside in MP2 tier 3 335 .
  • Monitor point MP3 105 n may reside in MP3 tier 0 340
  • sensor nodes 205 m - 205 o may reside in MP3 tier 1 345
  • sensor nodes 205 p - 205 q may reside in MP3 tier 2 350
  • sensor nodes 205 r - 205 s may reside in MP3 tier 3 355 .
  • Each tier shown in FIG. 3 represents an additional hop that data must traverse when traveling from a sensor node to a monitor point, or from a monitor point to a sensor node.
  • At least one node in any tier may act as a “parent” for nodes in the next higher tier (e.g., MP1 Tier 2 315 ).
  • sensor node 205 a acts as a “parent” node for sensor nodes 205 d - 205 e .
  • Sensor nodes 205 d - 205 e may relay all messages through sensor node 205 a to reach monitor point MP1 105 a.
  • FIG. 4 illustrates exemplary components of a sensor node 205 consistent with the present invention.
  • Sensor node 205 may include a transmitter/receiver 405 , an antenna 410 , a processing unit 415 , a memory 420 , an optional output device(s) 425 , an optional input device(s) 430 , one or more sensor units 435 a - 435 n , a clock 440 , and a bus 445 .
  • Transmitter/receiver 405 may connect sensor node 205 to a monitor point 105 or another sensor node.
  • transmitter/receiver 405 may include transmitter and receiver circuitry well known to one skilled in the art for transmitting and/or receiving data bursts via antenna 410 .
  • Processing unit 415 may perform all data processing functions for inputting, outputting and processing of data including data buffering and sensor node control functions.
  • Memory 420 may include random access memory (RAM) and/or read only memory (ROM) that provides permanent, semi-permanent, or temporary working storage of data and instructions for use by processing unit 415 in performing processing functions.
  • Memory 420 may also include large-capacity storage devices, such as magnetic and/or optical recording devices.
  • Output device(s) 425 may include conventional mechanisms for outputting data in video, audio and/or hard copy format.
  • output device(s) 425 may include a conventional display for displaying sensor measurement data.
  • Input device(s) 430 may permit entry of data into sensor node 205 .
  • Input device(s) 430 may include, for example, a touch pad or keyboard.
  • Sensor units 435 a - 435 n may include one or more of any type of conventional sensing device, such as, for example, acoustic sensors, motion-detection sensors, radar sensors, sensors that detect specific chemicals or families of chemicals, sensors that detect nuclear radiation or sensors that detect biological agents such as anthrax. Each sensor unit 435 a - 435 n may perform one or more measurements over a sampling period and transmit the measured values via packets, cells, datagrams, or the like to monitor points 105 a - 105 n .
  • Clock 440 may include conventional circuitry for maintaining a time base to enable the maintenance of a local time at sensor node 205 . Alternatively, sensor node 205 may derive a local time from an external clock signal, such as, for example, a GPS signal, or from an internal clock synchronized to an external time base.
  • Bus 445 may interconnect the various components of sensor node 205 and permit them to communicate with one another.
  • FIG. 19 illustrates exemplary components of a sensor node 205 consistent with the present invention.
  • Sensor node 205 may include a communication interface 1905 , an antenna 1910 , a processing unit 1915 , a memory 1920 , an optional output device(s) 1925 , an optional input device(s) 1930 , an optional geo-location unit 1935 , one or more sensor units 1940 a - 1940 n , and a bus 1945 .
  • Communication interface 1905 may connect sensor node 205 to a monitor point 105 or another sensor node.
  • communication interface 1905 may include transceiver circuitry well known to one skilled in the art for transmitting and/or receiving data bursts via antenna 1910 .
  • Processing unit 1915 may perform all data processing functions for inputting, outputting and processing of data including data buffering and sensor node control functions.
  • Memory 1920 provides permanent, semi-permanent, or temporary working storage of data and instructions for use by processing unit 1915 in performing processing functions.
  • Memory 1920 may include large-capacity storage devices, such as magnetic and/or optical recording devices.
  • Output device(s) 1925 may include conventional mechanisms for outputting data in video, audio and/or hard copy format.
  • output device(s) 1925 may include a conventional display for displaying sensor measurement data.
  • Input device(s) 1930 may permit entry of data into sensor node 205 .
  • Input device(s) 1930 may include, for example, a touch pad or keyboard.
  • Geo-location unit 1935 may include a conventional device for determining a geo-location of sensor node 205 .
  • geo-location unit 1935 may include a Global Positioning System (GPS) receiver that can receive GPS signals and can determine corresponding geo-locations in accordance with conventional techniques.
  • Sensor units 1940 a - 1940 n may include one or more of any type of conventional sensing device, such as, for example, acoustic sensors, motion-detection sensors, radar sensors, sensors that detect specific chemicals or families of chemicals, sensors that detect nuclear radiation or sensors that detect biological agents such as anthrax.
  • Each sensor unit 1940 a - 1940 n may perform one or more measurements over a sampling period and transmit the measured values via packets, cells, datagrams, or the like to monitor points 105 a - 105 n.
  • Bus 1945 may interconnect the various components of sensor node 205 and permit them to communicate with one another.
  • FIG. 5 illustrates exemplary components of a monitor point 105 consistent with the present invention.
  • Monitor point 105 may include a transmitter/receiver 505 , an antenna 510 , a processing unit 515 , a memory 520 , an input device(s) 525 , an output device(s) 530 , network interface(s) 535 , a clock 540 , and a bus 545 .
  • Transmitter/receiver 505 may connect monitor point 105 to another device, such as another monitor point or one or more sensor nodes.
  • transmitter/receiver 505 may include transmitter and receiver circuitry well known to one skilled in the art for transmitting and/or receiving data bursts via antenna 510 .
  • Processing unit 515 may perform all data processing functions for inputting, outputting, and processing of data.
  • Memory 520 may include Random Access Memory (RAM) that provides temporary working storage of data and instructions for use by processing unit 515 in performing processing functions.
  • Memory 520 may additionally include Read Only Memory (ROM) that provides permanent or semi-permanent storage of data and instructions for use by processing unit 515 .
  • Memory 520 can also include large-capacity storage devices, such as a magnetic and/or optical device.
  • Input device(s) 525 permits entry of data into monitor point 105 and may include a user interface (not shown).
  • Output device(s) 530 permits the output of data in video, audio, or hard copy format.
  • Network interface(s) 535 interconnects monitor point 105 with network 115 .
  • Clock 540 may include conventional circuitry for maintaining a time base to enable the maintenance of a local time at monitor point 105 .
  • monitor point 105 may derive a local time from an external clock signal, such as, for example, a GPS signal, or from an internal clock synchronized to an external time base.
  • Bus 545 interconnects the various components of monitor point 105 to permit the components to communicate with one another.
  • FIG. 20 illustrates exemplary components of a monitor point 105 consistent with the present invention.
  • Monitor point 105 may include a communication interface 2005 , an antenna 2010 , a processing unit 2015 , a memory 2020 , an input device(s) 2025 , an output device 2030 , network interface(s) 2035 and a bus 2040 .
  • Communication interface 2005 may connect monitor point 105 to another device, such as another monitor point or one or more sensor nodes.
  • communication interface 2005 may include transceiver circuitry well known to one skilled in the art for transmitting and/or receiving data bursts via antenna 2010 .
  • Processing unit 2015 may perform all data processing functions for inputting, outputting, and processing of data.
  • Memory 2020 may include Random Access Memory (RAM) that provides temporary working storage of data and instructions for use by processing unit 2015 in performing processing functions.
  • Memory 2020 may additionally include Read Only Memory (ROM) that provides permanent or semi-permanent storage of data and instructions for use by processing unit 2015 .
  • Memory 2020 can also include large-capacity storage devices, such as a magnetic and/or optical device.
  • Input device(s) 2025 permits entry of data into monitor point 105 and may include a user interface (not shown).
  • Output device(s) 2030 permits the output of data in video, audio, or hard copy format.
  • Network interface(s) 2035 interconnects monitor point 105 with network 115 .
  • Bus 2040 interconnects the various components of monitor point 105 to permit the components to communicate with one another.
  • FIG. 6A illustrates an exemplary database 600 that may be stored in memory 520 of a monitor point 105 .
  • Database 600 may include monitor point affiliation/schedule data 605 that includes identifiers of sensor nodes affiliated with monitor point 105 , and scheduling data indicating times at which monitor point 105 may transmit to, or receive bursts of data from, affiliated sensor nodes.
  • FIG. 6B illustrates exemplary data that may be contained in monitor point affiliation/schedule data 605 .
  • Monitor point affiliation/schedule data 605 may include “affiliated children IDs” data 610 and “Tx/Rx schedule” data 615 .
  • “Tx/Rx schedule” data 615 may further include “parent Tx” 620 data, “child-to-parent Tx” data 625 , and “next tier activity” data 630 .
  • “Affiliated children IDs” data 610 may include unique identifiers of sensor nodes 205 that are affiliated with monitor point 105 and, thus, from which monitor point 105 may receive messages.
  • “Parent Tx” data 620 may include a time at which monitor point 105 may transmit messages to sensor nodes identified by the “affiliated children IDs” data 610 .
  • “Child-to-Parent Tx” data 625 may include times at which sensor nodes identified by “affiliated children IDs” 610 may transmit messages to monitor point 105 .
  • “Next Tier Activity” data 630 may include times at which sensor nodes identified by the “affiliated children IDs” data 610 may transmit messages to, and receive messages from, their affiliated children.
  • FIG. 21A illustrates an exemplary database 2100 that may be stored in memory 2020 of a monitor point 105 .
  • Database 2100 may include a monitor point table 2105 that further includes data regarding sensor nodes 205 in the network to which monitor point 105 can transmit message datagrams.
  • FIG. 21B illustrates an exemplary monitor point table 2105 that may include one or more table entries 2110 .
  • Each entry 2110 in monitor point table 2105 may include a number of fields, including a “sensor ID” field 2115 , a “geo-location” field 2120 , a “sensor message” field 2125 , a “# of Nodes” field 2130 , a “1 st Hop” field 2135 a through a “Nth Hop” field 2135 N, and a “Seq #” field 2140 .
  • “Sensor ID” field 2115 may indicate a unique identifier for a sensor node 205 in sensor network 110 .
  • “Geo-location” field 2120 may indicate a geographic location associated with a sensor node 205 identified by a corresponding “sensor ID” field 2115 .
  • “Sensor message” field 2125 may include a message, such as, for example, data from measurements performed at the sensor node 205 identified by a corresponding “sensor ID” field 2115 .
  • “# of Nodes” field 2130 may indicate the number of hops across sensor network 110 to reach the sensor node 205 identified by a corresponding “sensor ID” field 2115 from monitor point 105 .
  • “1 st Hop” field 2135 a through “Nth Hop” field 2135 N may indicate the unique identifier of each sensor node 205 in network 110 that a message datagram must hop to reach the sensor node 205 identified by a corresponding “sensor ID” field 2115 from monitor point 105 .
  • “Seq #” field 2140 may include a startup number, counter number and time stamp sub-fields (not shown) corresponding to sequencing data that may be extracted from received beacon messages (see FIG. 26 below).
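  • As an illustration of how such a table entry could supply the hop-by-hop path carried in a message datagram, the following Python sketch builds a datagram from an entry; the dictionary layout and the build_datagram helper are assumptions of this sketch, not details taken from the patent:

      # Illustrative only: keys mirror fields 2115-2140 described above, but the
      # encoding and the sample values are hypothetical.
      def build_datagram(entry, payload):
          return {
              "destination": entry["sensor_id"],   # "sensor ID" field 2115
              "path": list(entry["hops"]),         # "1st Hop" through "Nth Hop" fields 2135a-2135N
              "payload": payload,                  # e.g. a command or query for the sensor
          }

      entry = {
          "sensor_id": "205e",
          "geo_location": (42.39, -71.15),         # "geo-location" field 2120
          "num_hops": 2,                           # "# of Nodes" field 2130
          "hops": ["205a", "205d"],                # intermediate nodes toward sensor 205e
          "seq": (3, 17),                          # "Seq #" field 2140 (startup, counter)
      }
      print(build_datagram(entry, {"query": "report measurements"}))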
  • FIG. 7A illustrates an exemplary database 700 that may be stored in memory 420 of a sensor node 205 .
  • Database 700 may include sensor affiliation/schedule data 705 that may further include data indicating which sensor nodes are affiliated with sensor node 205 and indicating schedules for sensor node 205 to transmit and receive messages.
  • FIG. 7B illustrates exemplary sensor affiliation/schedule data 705 .
  • Sensor affiliation/schedule data 705 may include “designated parent ID” data 710 , “parent's schedule” data 715 , “derived schedule” data 720 , and “affiliated children IDs” data 725 .
  • “Designated parent ID” data 710 may include a unique identifier that identifies the “parent” node, in a lower tier of sensor network 110 , to which sensor node 205 forwards messages.
  • “Parent's schedule” data 715 may further include “parent Tx” data 620 , “child-to-parent Tx” data 625 and “next tier activity” data 630 .
  • “Derived schedule” data 720 may further include “this node Tx” data 730 , “children-to-this node Tx” data 735 , and “this node's next tier activity” data 740 .
  • This node Tx data 730 may indicate a time at which sensor node 205 forwards messages to sensor nodes identified by “affiliated children IDs” data 725 .
  • “Children-to-this node Tx” data 735 may indicate times at which sensor nodes identified by “affiliated children IDs” data 725 may forward messages to sensor node 205 .
  • “This node's next tier activity” 740 may indicate one or more time periods allocated to sensor nodes in the next higher tier for transmitting and receiving messages.
  • FIG. 22A illustrates an exemplary database 2200 that may be stored in memory 1920 of a sensor node 205 .
  • Database 2200 may include a sensor forwarding table 2205 that further includes data for forwarding beacon messages and/or message datagrams received at a sensor node 205 .
  • FIG. 22B illustrates an exemplary sensor forwarding table 2205 that may include one or more table entries 2210 .
  • Each entry 2210 in sensor forwarding table 2205 may include a number of fields, including a “Use?” field 2215 , a “time stamp” field 2220 , a “monitor point ID” field 2225 , a “Seq #” field 2230 , a “next hop” field 2235 , a “# of hops” field 2240 and a “valid” field 2245 .
  • “Use?” field 2215 may identify the “monitor point ID” field 2225 that sensor node 205 will use to identify the monitor point 105 to which it will send all of its message datagrams.
  • the identified monitor point may include the monitor point that has the least number of hops to reach from sensor node 205 .
  • “Time stamp” field 2220 may indicate a time associated with each entry 2210 in sensor forwarding table 2205 .
  • “Monitor point ID” field 2225 may include a unique identifier that identifies a monitor point 105 in network 100 associated with each entry 2210 in forwarding table 2205 .
  • “Seq #” field 2230 may include the sequence number of the most recent beacon message received from the monitor point 105 identified in the corresponding “monitor point ID” field 2225 .
  • “Seq #” field 2230 may further include “startup number,” “counter number,” and “time stamp” sub-fields (not shown).
  • the “startup number” sub-field may include a number indicating how many times the monitor point 105 identified in the corresponding “monitor point ID” field 2225 has been powered up.
  • the “counter number” sub-field may include a number indicating the number of times the monitor point 105 identified in the corresponding “monitor point ID” field 2225 has sent a beacon message.
  • the “time stamp” sub-field may include a time at which the monitor point 105 identified in “monitor point ID” field 2225 sent a beacon message from which the data in the corresponding entry 2210 was derived.
  • Monitor point 105 may derive the time from an external clock signal, such as, for example, a GPS signal, or from an internal clock synchronized to an external time base.
  • “Next hop” field 2235 may include an identifier for a neighboring sensor node from which the sensor node 205 received information concerning the monitor point 105 identified by “monitor point ID” field 2225 .
  • the “# of hops” field 2240 may indicate the number of hops from sensor node 205 to reach the monitor point 105 identified by the corresponding “monitor point ID” field 2225 .
  • “Valid” field 2245 may indicate whether data stored in the fields of the corresponding table entry 2210 should be propagated in beacon messages sent from sensor node 205 .
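  • A minimal sketch, assuming the simplified entry layout below (our own, echoing fields 2215-2245), of how the "Use?" flag might be set on the valid entry whose monitor point is reachable in the fewest hops, as described above:

      def choose_monitor_point(forwarding_table):
          # Mark "Use?" on the valid entry with the least number of hops and
          # return that entry's monitor point ID.
          candidates = [e for e in forwarding_table if e["valid"]]
          if not candidates:
              return None
          best = min(candidates, key=lambda e: e["num_hops"])   # least number of hops
          for e in forwarding_table:
              e["use"] = e is best                              # "Use?" field 2215
          return best["monitor_point_id"]

      table = [
          {"use": False, "time_stamp": 0.0, "monitor_point_id": "MP1",   # fields 2215-2245, simplified
           "seq": (3, 17), "next_hop": "205a", "num_hops": 2, "valid": True},
          {"use": False, "time_stamp": 0.0, "monitor_point_id": "MP2",
           "seq": (5, 2), "next_hop": "205f", "num_hops": 4, "valid": True},
      ]
      print(choose_monitor_point(table))   # -> "MP1"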
  • FIG. 8 illustrates an exemplary schedule message 800 that may be transmitted from a monitor point 105 or sensor node 205 for scheduling message transmit and receive times within sensor network 110 .
  • Schedule message 800 may include a number of data fields, including “transmitting node ID” data 805 , “parent Tx” data 620 , and “next-tier node transmit schedule” data 810 .
  • “Next-tier node transmit schedule” 810 may further include “child-to-parent Tx” data 625 and “next tier activity” data 630 .
  • “Transmitting node ID” data 805 may include a unique identifier of the monitor point 105 or sensor node 205 originating the schedule message 800 .
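  • One possible in-memory representation of schedule message 800 is sketched below; the field names follow the data items described above, but the types, window encoding, and use of a Python dataclass are assumptions of this sketch:

      from dataclasses import dataclass
      from typing import Dict, Tuple

      @dataclass
      class ScheduleMessage:
          transmitting_node_id: str                            # "transmitting node ID" data 805
          parent_tx: Tuple[float, float]                       # "parent Tx" data 620: (start, end) of parent's transmit window
          child_to_parent_tx: Dict[str, Tuple[float, float]]   # "child-to-parent Tx" data 625, keyed by child ID
          next_tier_activity: Dict[str, Tuple[float, float]]   # "next tier activity" data 630, keyed by child ID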
  • FIG. 9 illustrates exemplary transmit/receive scheduling that may be employed at each sensor node 205 of network 110 according to schedule messages 800 received from “parent” nodes in a lower tier.
  • the first time period shown on the scheduling timeline, Parent Tx time 620 may include the time period allocated by a “parent” node to transmitting messages from the “parent” node to its affiliated children.
  • the time periods “child-to-parent Tx” 625 may include time periods allocated to each affiliated child of a parent node for transmitting messages to the parent node.
  • the receiver of the parent node may be turned on to receive messages from the affiliated children.
  • the “next tier activity” 630 may include time periods allocated to each child of a parent node for transmitting messages to, and receiving messages from, each child's own children nodes. From the time periods allocated to the children of a parent node, each child may construct its own derived schedule. This derived schedule may include a time period, “this node Tx” 730 during which the child node may transmit to its own affiliated children. The derived schedule may further include time periods, “children-to-this node Tx” 735 during which these affiliated children may transmit messages to the parent's child node. The derived schedule may additionally include time periods, designated “this node's next tier activity” 740 , that may be allocated to this node's children so that they may, in turn, construct their own derived schedule for their own affiliated children.
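  • A hedged sketch of how a child node might derive its own schedule (items 730, 735, and 740) from the "next tier activity" window its parent allocated to it; the equal-split policy, the one-third partitioning, and the function name are assumptions of this sketch rather than the patent's specification:

      def derive_schedule(window_start, window_end, child_ids):
          # Partition this node's allocated window among itself and its children.
          duration = window_end - window_start
          n = max(len(child_ids), 1)
          this_node_tx = (window_start, window_start + duration / 3)        # "this node Tx" 730
          slot = (duration / 3) / n
          children_tx = {c: (this_node_tx[1] + i * slot,
                             this_node_tx[1] + (i + 1) * slot)
                         for i, c in enumerate(child_ids)}                  # "children-to-this node Tx" 735
          next_tier_start = window_start + 2 * duration / 3
          tier_slot = (window_end - next_tier_start) / n
          next_tier = {c: (next_tier_start + i * tier_slot,
                           next_tier_start + (i + 1) * tier_slot)
                       for i, c in enumerate(child_ids)}                    # "this node's next tier activity" 740
          return this_node_tx, children_tx, next_tier

      # Example: a node allocated the window [10.0 s, 16.0 s) with two affiliated children.
      print(derive_schedule(10.0, 16.0, ["205d", "205e"]))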
  • FIGS. 10-11 are flowcharts that illustrate exemplary processing, consistent with the present invention, for affiliating “child” sensor nodes 205 with “parent” nodes in a lower tier. Such “parent” nodes may include other sensor nodes 205 in sensor network 110 or monitor points 105 . As one skilled in the art will appreciate, the method exemplified by FIGS. 10 and 11 can be implemented as a sequence of instructions and stored in memory 420 of sensor node 205 for execution by processing unit 415 .
  • An unaffiliated sensor node 205 may begin parent/child affiliation processing by turning on its receiver 405 and continuously listening for schedule message(s) transmitted from a lower tier of sensor network 110 [step 1005 ] ( FIG. 10 ). Sensor node 205 may be unaffiliated with any “parent” node if it has recently been powered on. Sensor node 205 may also be unaffiliated if it has stopped receiving schedule messages from its “parent” node for a specified time period. If one or more schedule messages are received [step 1010 ], unaffiliated sensor node 205 may select a neighboring node to designate as a parent [step 1015 ].
  • sensor node 205 may select a neighboring node whose transmit signal has the greatest strength or the least bit error rate (BER). Sensor node 205 may insert the “transmitting node ID” data 805 from the corresponding schedule message 800 of the selected neighboring node into the “designated parent ID” data 710 of database 700 [step 1020 ]. Sensor node 205 may then update database 700 's “parent's schedule” data 715 with “parent Tx” data 620 , “child-to-parent Tx” data 625 , and “next tier activity” data 630 from the corresponding schedule message 800 of the selected neighboring node [step 1025 ].
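  • The parent-selection step just described (steps 1010-1015) could be sketched as follows; the record layout and the select_parent name are assumptions of this sketch:

      def select_parent(heard_schedules):
          # heard_schedules: list of dicts with 'transmitting_node_id', 'rssi', 'ber'.
          # Prefer the strongest signal; break ties with the lowest bit error rate.
          best = max(heard_schedules, key=lambda s: (s["rssi"], -s["ber"]))
          return best["transmitting_node_id"]

      heard = [
          {"transmitting_node_id": "205a", "rssi": -61.0, "ber": 1e-5},
          {"transmitting_node_id": "205b", "rssi": -74.0, "ber": 1e-4},
      ]
      print(select_parent(heard))   # -> "205a" becomes "designated parent ID" data 710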
  • Sensor node 205 may determine if any affiliation messages have been received from sensor nodes residing in higher tiers [step 1105 ] ( FIG. 11 ). If so, sensor node 205 may store the node identifiers contained in the affiliation messages in database 700 's “affiliated children IDs” data 725 [step 1110 ]. Sensor node 205 may also transmit an affiliation message to the node identified by “designated parent ID” data 710 in database 700 [step 1115 ]. Sensor node 205 may further determine a derived schedule from the “next tier activity” data 630 in database 700 [step 1120 ] and store it in the “derived schedule” data 720 .
  • FIG. 12 is a flowchart that illustrates exemplary processing, consistent with the present invention, for receiving affiliation messages and transmitting schedule messages at a monitor point 105 .
  • the method exemplified by FIG. 12 can be implemented as a sequence of instructions and stored in memory 520 of monitor point 105 for execution by processing unit 515 .
  • Monitor point message processing may begin with a monitor point 105 receiving one or more affiliation messages from neighboring sensor nodes [step 1205 ] ( FIG. 12 ).
  • Monitor point 105 may insert the node identifiers from the received affiliation message(s) into database 600 's “affiliated children IDs” data 610 [step 1210 ].
  • Monitor point 105 may construct the “Tx/Rx schedule” 615 based on the number of affiliated children indicated in “affiliated children IDs” data 610 [step 1215 ].
  • Monitor point 105 may then transmit a schedule message 800 to sensor nodes identified by “affiliated children IDs” data 610 containing monitor point 105 's “transmitting node ID” data 805 , “parent Tx” data 620 , and “next-tier transmit schedule” data 810 [step 1220 ].
  • Schedule message 800 may be transmitted periodically using conventional multiple access mechanisms, such as, for example, Carrier Sense Multiple Access (CSMA).
  • monitor point 105 may determine if acknowledgements (ACKs) have been received from all affiliated children [step 1225 ]. If not, monitor point 105 may re-transmit the schedule message 800 at regular intervals until ACKs are received from all affiliated children [step 1230 ]. In this manner, monitor point 105 coordinates and schedules the power on/off intervals of the sensor nodes with which it is associated (i.e., the nodes to and from which it transmits and receives data).
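  • The retransmit-until-acknowledged behavior of steps 1220-1230 might be sketched as follows; send_schedule and acks_received are hypothetical stand-ins for the monitor point's radio interface, and the retry interval and retry limit are assumptions of this sketch:

      import time

      def broadcast_schedule(schedule_msg, affiliated_children, send_schedule, acks_received,
                             retry_interval_s=1.0, max_retries=10):
          pending = set(affiliated_children)
          for _ in range(max_retries):
              send_schedule(schedule_msg)          # e.g. CSMA broadcast of schedule message 800
              pending -= acks_received()           # remove children that acknowledged
              if not pending:
                  return True                      # all affiliated children have ACKed
              time.sleep(retry_interval_s)         # re-transmit at a regular interval
          return False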
  • FIGS. 13-16 are flowcharts that illustrate exemplary processing, consistent with the present invention, for receiving and/or transmitting messages at a sensor node 205 .
  • the method exemplified by FIGS. 13-16 can be implemented as a sequence of instructions and stored in memory 420 of sensor node 205 for execution by processing unit 415 .
  • the exemplary reception and transmission of messages at a sensor node as illustrated in FIGS. 13-16 is further demonstrated with respect to the exemplary messages transmission diagram illustrated in FIG. 17 .
  • Sensor node 205 (“This node” 1710 of FIG. 17 ) may begin processing by determining if it is the next parent transmit time as indicated by clock 440 and the “parent Tx” data 620 of database 700 [step 1305 ]. If so, sensor node 205 may turn on receiver 405 [step 1310 ] ( FIG. 13 ) and listen for messages transmitted from a parent (see also “Parent Node” 1705 of FIG. 17 ). If no messages are received, sensor node 205 determines if a receive timer has expired [step 1405 ] ( FIG. 14 ). The receive timer may indicate a maximum time period that sensor node 205 (see “This Node” 1710 of FIG. 17 ) may listen for messages before turning off receiver 405 .
  • sensor node 205 may turn off receiver 405 [step 1410 ]. If messages have been received (see “Parent TX” 620 of FIG. 17 ), sensor node 205 may, optionally, transmit an ACK to the parent node that transmitted the messages [step 1320 ]. Sensor node 205 may then turn off receiver 405 [step 1325 ].
  • sensor node 205 may determine if sensor node 205 is the destination of each of the received messages [step 1330 ]. If so, sensor node 205 may process the message [step 1335 ]. If not, sensor node 205 may determine a next hop in sensor network 110 for the message using conventional routing tables, and place the message in a forwarding queue [step 1340 ]. At step 1415 , sensor node 205 may determine if it is time to transmit messages to the parent node as indicated by “child-to-parent Tx” data 625 of database 700 (see “child-to-parent Tx” 625 of FIG. 17 ). If not, sensor node 205 may sleep until clock 440 indicates that it is time to transmit messages to the parent node [step 1420 ].
  • sensor node 205 may turn on transmitter 405 and transmit all messages intended to go to the node indicated by the “designated parent ID” data 710 of database 700 [step 1425 ]. After all messages are transmitted to the parent node, sensor node 205 may turn off transmitter 405 [step 1430 ].
  • Sensor node 205 may create a new derived schedule for its children identified by “affiliated children IDs” data 725 , based on the “parent's schedule” 715 , and may then store the new derived schedule in the “derived schedule” data 720 of database 700 [step 1435 ].
  • Sensor node 205 may inspect the “this node Tx” data 730 of database 700 to determine if it is time to transmit to the sensor nodes identified by the “affiliated children IDs” data 725 [step 1505 ] ( FIG. 15 ). If so, sensor node 205 may turn on transmitter 405 and transmit messages, including schedule messages, to its children [step 1510 ] (see “This Node Tx” 730 , FIG. 17 ).
  • sensor node 205 may, optionally, determine if an ACK is received [step 1515 ]. If not, sensor node 205 may further, optionally, re-transmit the corresponding message at a regular interval until an ACK is received [step 1520 ]. When all ACKs are received, sensor node 205 may turn off transmitter 405 [step 1525 ]. Sensor node 205 may then determine if it is time for its children to transmit to sensor node 205 as indicated by clock 440 and “children-to-this node Tx” data 735 of database 700 [step 1605 ] ( FIG. 16 ).
  • sensor node 205 may turn on receiver 405 and receive one or more messages from the children identified by the “affiliated children IDs” data 725 of database 700 [step 1610 ] (see “Children-to-this Node Tx” 735 , FIG. 17 ). Sensor node 205 may then turn off receiver 405 [step 1615 ] and processing may return to step 1305 ( FIG. 13 ). In this manner, sensor nodes may power on and off their transmitters and receivers at appropriate times to conserve energy, while still performing their intended functions in network 100 .
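  • Condensing FIGS. 13-16, one pass of the receive/forward/transmit cycle might look like the sketch below; the node, radio, and clock objects and all of their methods are hypothetical stand-ins, and only the ordering of the power-on/power-off intervals is taken from the description above:

      def duty_cycle(node, radio, clock):
          clock.sleep_until(node.parent_tx_time)           # wake at "parent Tx" 620
          radio.turn_on_rx()
          msgs = radio.receive_until(node.rx_timeout)      # listen, bounded by the receive timer
          radio.turn_off_rx()
          for m in msgs:
              if m.destination == node.id:
                  node.process(m)                          # message is for this node
              else:
                  node.forward_queue.append(m)             # queue for the next hop
          clock.sleep_until(node.child_to_parent_tx_time)  # "child-to-parent Tx" 625
          radio.turn_on_tx()
          radio.send_all(node.forward_queue)               # forward queued messages to the parent
          radio.turn_off_tx()
          node.derived_schedule = node.derive_schedule()   # new derived schedule for affiliated children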
  • FIG. 18 illustrates exemplary receiver timing when monitor points 105 or sensor nodes 205 of network 100 use internal clocks that may have inherent “clock drift.” “Clock drift” occurs when an internal clock runs faster or slower than the true elapsed time and may be inherent in many types of internal clocks employed in monitor points 105 or sensor nodes 205 . “Clock drift” may be taken into account when scheduling the time at which a node's receiver must be turned on, since the transmitting node and the receiving node may both have drifting clocks. As shown in FIG. 18 , T nominal 1805 represents the next time at which a receiver must be turned on based on scheduling data contained in the schedule message received from a parent node.
  • an “Rx Drift Window” 1810 exists around this time, which represents T nominal plus or minus the “Max Rx Drift” 1815 for this node over the amount of time remaining until T nominal . If the transmitting node has zero clock drift, the receiving node should, thus, wake up at the beginning of its “Rx Drift Window” 1810 .
  • the clock at the transmitting node may also incur clock drift, “Max Tx Drift” 1820 , that must be accounted for at the receiving node when turning on and off the receiver.
  • the receiving node should, thus, turn on its receiver at a local clock time that is “Max Tx Drift” 1820 plus “Max Rx Drift” 1815 before T nominal .
  • the receiving node should also turn off its receiver at a local clock time that is “Max Rx Drift” 1815 plus “Max Tx Drift” 1820 plus a maximum estimated time to receive a packet from the transmitting node (T RX 1825 ).
  • T RX 1825 may include packet transmission time and packet propagation time.
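  • A worked example of this receiver window, reading the turn-off time as measured after T nominal; the numbers are purely illustrative:

      def rx_window(t_nominal, max_tx_drift, max_rx_drift, t_rx):
          # Receiver turn-on/turn-off times in the receiver's local clock, per the
          # description above; all arguments are in seconds.
          turn_on = t_nominal - (max_tx_drift + max_rx_drift)
          turn_off = t_nominal + max_rx_drift + max_tx_drift + t_rx
          return turn_on, turn_off

      # Hypothetical numbers: T nominal = 120 s, 50 ms of drift on each side, and
      # 20 ms to receive a packet -> turn on at about 119.90 s, off at about 120.12 s.
      print(rx_window(120.0, 0.050, 0.050, 0.020))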
  • FIG. 23 illustrates an exemplary beacon message 2300 that may be transmitted from a monitor point 105 .
  • Beacon message 2300 may include a number of fields, including a “transmitter node ID” field 2305 , a “checksum” field 2310 , a “NUM” field 2315 , a monitor point “D(i) sequence #” field 2320 and a “# of hops to D(i)” field 2325 .
  • Transmitter node ID field 2305 may include a unique identifier that identifies the node in network 100 that is the source of beacon message 2300 .
  • “Checksum” field 2310 may include any type of conventional error detection value that can be used to determine the presence of errors or corruption in beacon message 2300 .
  • NUM field 2315 may indicate the number of different monitor points 105 that are described in beacon message 2300 . When beacon message 2300 is sent directly from a monitor point 105 , “NUM” field 2315 can be set to one, indicating that the message describes only a single monitor point.
  • “D(i) Sequence #” field 2320 may include a “startup number” sub-field 2330 , a “counter number” sub-field 2335 , and an optional “time stamp” sub-field 2340 corresponding to the monitor point 105 identified by “transmitter node ID” field 2305 .
  • “Startup number” sub-field 2330 may include a large number of data bits, such as, for example, 32 bits and may be stored in memory 2020 .
  • “Startup number” 2330 may be set to zero when monitor point 105 is initially manufactured. At every power-up of monitor point 105 , processing unit 2015 can read the “startup number” stored in memory 2020 , increment the number, and store the incremented startup number back in memory 2020 . “Startup number” 2330 , thus, maintains a log of how many times monitor point 105 has been powered up.
  • Counter number sub-field 2335 may be set to zero whenever monitor point 105 powers up. Counter number sub-field 2335 may further be incremented by one each time monitor point sends a beacon message 2300 . “Start-up number” sub-field 2330 combined with “counter number” sub-field 2335 may, thus, provide a unique determination of which beacon message 2300 has been constructed and sent at a later time than other beacon messages. “Time stamp” subfield 2340 may include a time at which the monitor point 105 sends beacon message 2300 . “# hops to D(i)” field 2325 may be set to zero, indicating that beacon message 2300 has been sent directly from monitor point 105 .
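  • The (startup number, counter number) pair therefore gives a total order on beacons from one monitor point, as the following sketch illustrates; the function name is ours:

      def beacon_is_newer(seq_a, seq_b):
          # Each seq is a (startup number, counter number) pair; tuple comparison gives
          # exactly the ordering described above: a larger startup number always wins,
          # and within one power-up the larger counter number is the later beacon.
          return seq_a > seq_b

      print(beacon_is_newer((7, 0), (6, 41233)))   # True: the monitor point has been power-cycled
      print(beacon_is_newer((6, 42), (6, 41)))     # True: later beacon within the same power-up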
  • FIGS. 24-25 are flowcharts that illustrate exemplary processing, consistent with the present invention, for constructing and transmitting beacon messages from a monitor point 105 .
  • the method exemplified by FIGS. 24-25 can be implemented as a sequence of instructions and stored in memory 2020 of monitor point 105 for execution by processing unit 2015 .
  • Monitor point 105 may begin processing when monitor point 105 powers up from a powered down state [step 2405 ]. At power-up, monitor point 105 may retrieve an old “startup number” 2330 stored in memory 2020 [step 2410 ] and may increment the retrieved “startup number” 2330 by one [step 2415 ]. Monitor point 105 may then store the incremented “startup number 2330 ” in memory 2020 [step 2420 ].
  • Monitor point 105 may determine if an interval timer is equal to a value P [step 2425 ].
  • the interval timer may be implemented, for example, in processing unit 2015 .
  • Value P may be preset at manufacture, or may be entered or changed via input device(s) 2025 .
  • monitor point 105 may formulate a beacon message 2300 that may include the “transmitter node ID” field 2305 set to monitor point 105 's unique identifier, “NUM” field 2315 set to one, “startup number” sub-field 2330 set to the “startup number” currently stored in memory 2020 , “counter number” sub-field 2335 set to the “counter number” currently stored in memory 2020 , “time stamp” sub-field 2340 set to a current time and the “# hops to D(i)” field 2325 set to zero [step 2430 ].
  • Monitor point 105 may then calculate a checksum value of the formulated message and store the resultant checksum in “checksum” field 2310 [step 2505 ]( FIG. 25 ). Monitor point 105 may transmit the formulated beacon message 2300 [step 2510 ] and increment “counter number” 2335 stored in memory 2020 [step 2515 ]. Processing may repeat at step 2425 until power down of monitor point 105 .
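  • A sketch of steps 2405-2515 follows: increment and persist the startup number at power-up, then periodically emit a beacon with NUM set to one and the hop count set to zero. The storage and radio calls (load_startup, save_startup, send) are hypothetical stand-ins, and the CRC-32-over-JSON checksum is an assumption, since the patent only calls for a conventional error-detection value:

      import binascii, json, time

      def compute_checksum(fields):
          # CRC-32 over a canonical JSON encoding; the algorithm choice is illustrative.
          return binascii.crc32(json.dumps(fields, sort_keys=True).encode())

      def monitor_point_beacon_loop(mp_id, load_startup, save_startup, send, period_p):
          startup = load_startup() + 1          # retrieve and increment the stored startup number (steps 2410-2415)
          save_startup(startup)                 # store the incremented startup number (step 2420)
          counter = 0                           # counter number resets at every power-up
          while True:
              beacon = {
                  "transmitter_node_id": mp_id, # field 2305
                  "num": 1,                     # field 2315: describes a single monitor point
                  "startup_number": startup,    # sub-field 2330
                  "counter_number": counter,    # sub-field 2335
                  "time_stamp": time.time(),    # sub-field 2340
                  "hops_to_d": 0,               # field 2325: sent directly from the monitor point
              }
              beacon["checksum"] = compute_checksum(beacon)   # field 2310 (step 2505)
              send(beacon)                      # transmit the formulated beacon (step 2510)
              counter += 1                      # increment the counter number (step 2515)
              time.sleep(period_p)              # wait until the interval timer reaches P again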
  • FIG. 26 illustrates an exemplary beacon message 2600 that may be transmitted from a sensor node 205 .
  • Beacon message 2600 may include any number of fields, including “transmitter node ID” field 2305 , “checksum” field 2310 , “NUM” field 2315 , monitor point “D(i) identifier” fields 2605 a - 2605 n , monitor point “D(i) sequence #” fields 2610 a - 2610 n , and monitor point “# of Hops to D(i)” fields 2615 a - 2615 n.
  • “D(i) identifier” fields 2605 a - 2605 n may identify monitor points 105 from which a sensor node 205 has received beacon messages 2300 .
  • “D(i) sequence #” fields 2610 a - 2610 n may include “startup number” sub-fields 2620 a - 2620 n , “counter number” sub-fields 2625 a - 2625 n , and “time stamp” sub-fields 2630 a - 2630 n associated with a monitor point 105 identified by a corresponding “D(i) identifier” field 2605 .
  • “# of hops to D(i)” fields 2615 a - 2615 n may indicate the number of hops in sensor network 110 to reach the monitor point 105 identified by the corresponding “D(i) identifier” field 2605 .
  • FIG. 27 is a flowchart that illustrates exemplary processing, consistent with the present invention, for constructing and transmitting a beacon message 2600 at a sensor node 205 .
  • As one skilled in the art will appreciate, the method exemplified by FIG. 27 can be implemented as a sequence of instructions and stored in memory 1920 of sensor node 205 for execution by processing unit 1915.
  • Sensor node 205 may begin processing by setting “transmitter node ID” field 2305 to sensor node 205 's unique identifier [step 2705 ]. Sensor node 205 may further set “NUM” field 2315 to the number of entries in sensor forwarding table 2105 for which the “valid” field 2145 equals one [step 2710 ]. For each valid entry 2110 in sensor forwarding table 2105 , sensor node 205 may increment the “# of Hops” field 2140 by one and copy information from the entry 2110 into a corresponding field of beacon message 2600 [step 2715 ].
  • Sensor node 205 may then calculate a checksum of beacon message 2600 and store the calculated value in “checksum” field 2310 [step 2720 ]. Sensor node 205 may then transmit beacon message 2600 every s seconds, repeating steps 2705 - 2720 for each transmitted beacon message 2600 [step 2725 ]. The value s may be set at manufacture or may be entered or changed via input device(s) 1930 . Every multiple m of s seconds, sensor node 205 may inspect the “time stamp” field 2120 of each entry 2110 of sensor forwarding table 2105 [step 2730 ]. For each entry 2110 of sensor forwarding table 2105 that includes a field that was modified more than z seconds in the past, sensor node 205 may set that entry's “valid” field 2145 to zero [step 2735 ], thus, “aging out” that entry.
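  • The beacon construction and aging steps of FIG. 27 might be sketched as follows. The forwarding table is assumed here to be a list of entry dictionaries and crc to be a checksum helper; both are editorial simplifications rather than the described data structures.

```python
import time

def build_sensor_beacon(node_id, forwarding_table, crc):
    """Sketch of beacon message 2600 construction at a sensor node 205 (FIG. 27)."""
    valid_entries = [e for e in forwarding_table if e["valid"] == 1]
    beacon = {
        "transmitter_node_id": node_id,        # step 2705
        "num": len(valid_entries),             # step 2710
        "entries": [],
    }
    for entry in valid_entries:                # step 2715
        beacon["entries"].append({
            "d_id": entry["monitor_point_id"],
            "seq": entry["seq"],
            "hops": entry["hops"] + 1,         # advertise one more hop than stored
        })
    beacon["checksum"] = crc(beacon)           # step 2720
    return beacon

def age_out_entries(forwarding_table, z_seconds):
    """Invalidate entries not refreshed within z seconds (steps 2730-2735)."""
    now = time.time()
    for entry in forwarding_table:
        if now - entry["time_stamp"] > z_seconds:
            entry["valid"] = 0
```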
  • FIGS. 28-31 are flowcharts that illustrate exemplary processing, consistent with the present invention, for receiving beacon messages from monitor points/sensor nodes and updating corresponding entries in sensor forwarding table 2105 .
  • As one skilled in the art will appreciate, the method exemplified by FIGS. 28-31 can be implemented as a sequence of instructions and stored in memory 1920 of sensor node 205 for execution by processing unit 1915.
  • Sensor node 205 may receive a transmitted beacon message 2300/2600 from either a monitor point 105 or another sensor node [step 2805]. Sensor node 205 may then calculate a checksum of the received beacon message 2300/2600 and compare the calculated checksum value with the message's "checksum" field 2310 [step 2810]. Sensor node 205 may determine if the checksums agree, indicating that no errors or corruption of the beacon message 2300/2600 occurred during transmission [step 2815]. If the checksums do not agree, sensor node 205 may discard the message [step 2820].
  • If the checksums agree, sensor node 205 may inspect sensor forwarding table 2105 for any entries 2110 with the "next hop" field 2135 equal to the received message's "transmitter node ID" field 2305 [step 2825]. Sensor node 205 may then set the "valid" field 2145 equal to zero for all such entries [step 2830].
  • Sensor node 205 may inspect the received message's “NUM” field 2315 to determine the number of monitor nodes described in the beacon message 2300 / 2600 [step 2835 ]. Sensor node 205 may then set a counter value i to 1 [step 2840 ]. Sensor node 205 may further extract the monitor node “D(i) identifier” field 2605 , the “D(i) sequence #” field 2610 , and the “# of hops to D(i)” field 2615 , corresponding to the counter value i, from beacon message 2300 / 2600 [step 2905 ] ( FIG. 29 ).
  • Sensor node 205 may inspect the “monitor point ID” field 2125 of forwarding table 2105 to determine if there is a table entry 2110 corresponding to the message “D(i) identifier” field 2605 [step 2910 ]. If no such table entry 2110 exists, sensor node 205 may create a new entry 2110 in forwarding table 2105 for monitor node D(i) [step 2915 ] and processing may proceed to step 3015 below. If there is a table entry 2110 corresponding to monitor node D(i), sensor node 205 may compare the beacon message “# of hops to D(i)” field 2615 with the “# of Hops” field 2140 in forwarding table 2105 [step 2920 ].
  • If the message "# of hops to D(i)" field 2615 is less than, or equal to, the "# of Hops" field 2140 of forwarding table 2105, then processing may proceed to step 3015 below. If the message "# of hops to D(i)" field 2615 is greater than the "# of Hops" field 2140, then sensor node 205 may determine if the "valid" field 2145 is set to zero for the table entry 2110 that includes the "monitor point ID" field 2125 that is equal to D(i) [step 2930]. If the "valid" field 2145 is equal to one, indicating that the entry is valid, processing may proceed with step 3115 below.
  • If the "valid" field 2145 is equal to zero, sensor node 205 may then determine if the message "startup number" sub-field 2620 is greater than table 2105's "startup" sub-field of "Seq #" field 2130 [step 3005]. If so, processing may continue with step 3015 below. If not, sensor node 205 may determine if the message "startup number" sub-field 2620 is equal to table 2105's "startup" sub-field of "Seq #" field 2130 and the message "counter number" sub-field 2625 is greater than table 2105's "counter number" sub-field of "Seq #" field 2130 [step 3010]. If not, processing may continue with step 3115 below.
  • Sensor node 205 may insert the message's "D(i) sequence #" field 2610 into table 2105's "Seq #" field 2130 [step 3015]. Sensor node 205 may further insert the message's "# of hops to D(i)" field 2615 into the "# of Hops" field 2140 of table 2105 [step 3020]. Sensor node 205 may also insert the message's "transmitter node ID" field 2305 into the "next hop" field 2135 of table 2105 [step 3025].
  • Sensor node 205 may further set the “valid” flag 2145 for the table entry 2110 corresponding to the monitor point identifier D(i) to one [step 3105 ] and time stamp the entry 2110 with a local clock, storing the time stamp in “time stamp” field 2120 [step 3110 ].
  • Sensor node 205 may increment the counter value i by one [step 3115 ] and determine if the counter value i is equal to the message's “NUM” field 2315 plus one [step 3120 ]. If not, processing may return to step 2905 . If so, sensor node 205 may set the “Use?” field 2115 for all entries 2110 in forwarding table 2105 to zero [step 3125 ].
  • Sensor node 205 may inspect forwarding table 2105 to identify an entry 2110 with the “valid” flag 2145 set to one and that further has the smallest value in the “# of Hops” field 2140 [step 3130 ]. Sensor node 205 may then set the “Use?” field 2115 of the identified entry to one [step 3135 ].
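  • The per-step logic of FIGS. 28-31 can be condensed into the sketch below. It deliberately folds the hop-count and sequence-number tests into a single acceptance rule, treats a beacon 2300 from a monitor point as carrying a single entry, and assumes a dictionary-based table layout and a crc helper that ignores the "checksum" field itself; these are editorial assumptions, not the literal flowcharts.

```python
import time

def newer(msg_seq, table_seq):
    """Freshness test on ("startup number", "counter number") pairs (steps 3005-3010)."""
    return (msg_seq["startup"], msg_seq["counter"]) > (table_seq["startup"], table_seq["counter"])

def process_beacon(node, beacon, crc):
    """Simplified beacon handling at a sensor node 205 (FIGS. 28-31)."""
    if crc(beacon) != beacon["checksum"]:              # steps 2810-2820: discard corrupted beacons
        return
    table = node["forwarding_table"]                   # keyed by monitor point identifier D(i)
    sender = beacon["transmitter_node_id"]
    for entry in table.values():                       # steps 2825-2830: distrust routes via the sender
        if entry["next_hop"] == sender:
            entry["valid"] = 0
    for adv in beacon["entries"]:                      # one pass per monitor point described (field 2315)
        entry = table.get(adv["d_id"])
        if entry is None:                              # step 2915: create a new table entry
            entry = table[adv["d_id"]] = {"seq": adv["seq"], "hops": adv["hops"]}
            accept = True
        else:
            # Accept routes that are no longer than the stored one, or routes that
            # replace an invalidated entry with fresher sequence information.
            accept = (adv["hops"] <= entry["hops"] or
                      (entry.get("valid") == 0 and newer(adv["seq"], entry["seq"])))
        if accept:                                     # steps 3015-3110: update, validate, time stamp
            entry.update(seq=adv["seq"], hops=adv["hops"], next_hop=sender,
                         valid=1, time_stamp=time.time())
    for entry in table.values():                       # step 3125: clear all "Use?" flags
        entry["use"] = 0
    valid = [e for e in table.values() if e.get("valid") == 1]
    if valid:                                          # steps 3130-3135: use the fewest-hop valid entry
        min(valid, key=lambda e: e["hops"])["use"] = 1
```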
  • FIG. 32 illustrates a first exemplary message datagram 3200 , consistent with the present invention, for transmitting data, such as, for example, sensor measurement data to either a destination monitor point 105 or a destination sensor node 205 in network 100 .
  • Message datagram 3200 may include a “source node ID” field 3205 , a “destination node ID” field 3210 , an optional “checksum” field 3215 , an optional “time-to-live” (TTL) field 3220 , an optional “geo-location” field 3225 , and a “sensor message” field 3230 .
  • Message datagram 3200 may also optionally include a “reverse path” flag 3235 , a “direction” indicator field 3240 , a “# of Node IDs Appended” field 3245 , and “1 st Hop Node ID” 3250 a through “Nth Hop Node ID” fields 3250 N.
  • Message datagram 3200 may additionally be used for transmitting data from a source monitor point 105 to a destination sensor node 205 in network 100 .
  • “Source node ID” field 3205 may include an identifier uniquely identifying a sensor node 205 or monitor point 105 that was the original source of message datagram 3200 .
  • “Destination node ID” field 3210 may include an identifier uniquely identifying a destination monitor point 105 or sensor node 205 in network 100 .
  • “Checksum” field 3215 may include any type of conventional error detection value that can be used to determine the presence of errors or corruption in sensor datagram 3200 .
  • “TTL” field 3220 may include a value indicating a number of hops before which the message datagram 3200 should be discarded. “TTL” field 3220 may be decremented by one at each hop through network 100 .
  • “Geo-location” field 3225 may include geographic location data associated with the message datagram source node.
  • “Sensor message” field 3230 may include sensor measurement data resulting from sensor measurements performed at the sensor node 205 identified by “source node ID” field 3205 .
  • “Reverse path” flag 3235 may indicate whether sensor datagram 3200 includes information that details the path datagram 3200 traversed from the source node identified in “source node ID” field 3205 to a current node receiving datagram 3200 .
  • “Direction” indicator field 3240 may indicate the direction in network 100 that message datagram 3200 is heading. The datagram 3200 direction can be either “inbound” towards a monitor point 105 or “outbound” towards a sensor node 205 .
  • “# of Node IDs Appended” field 3245 may indicate the number of sensor nodes described in sensor datagram by 1 st through Nth “hop node ID” fields 3250 a - 3250 N.
  • “1 st Hop Node ID” field 3250 a through “Nth Hop Node ID” field 3250 N may include the unique node identifiers identifying each node in the path between the source node indicated by “source node ID” field 3205 and the node currently receiving datagram 3200 .
  • FIGS. 33-34 are flowcharts that illustrate exemplary processing, consistent with the present invention, for fabricating and transmitting a sensor datagram 3200 at a sensor node 205.
  • As one skilled in the art will appreciate, the method exemplified by FIGS. 33-34 can be implemented as a sequence of instructions and stored in memory 1920 for execution by processing unit 1915.
  • Sensor node 205 may begin fabrication of sensor datagram 3200 by performing sensor measurements over one or more sampling periods using one or more sensor units 1940 a-1940 n [step 3305]. Sensor node 205 may then insert the sensor measurement data in the "sensor message" field 3230 [step 3310]. Sensor node 205 may further insert the node's own identifier in the datagram 3200 "source node ID" field 3205 [step 3315]. Sensor node 205 may also, optionally, insert a value in the datagram 3200 "time-to-live" field 3220 [step 3320]. Sensor node 205 may, optionally, insert its location in "geo-location" field 3225 [step 3325]. Sensor node 205's location may be determined by geo-location unit 1935.
  • Sensor node 205 may inspect forwarding table 2105 to identify the table entry 2110 with a “Use?” field 2115 equal to one and with the smallest “# of Hops” field 2140 [step 3330 ].
  • The monitor point identified by the "monitor point ID" field 2125 in the table entry 2110 with the "Use?" field 2115 equal to one and with the smallest "# of Hops" field 2140 will, thus, be the nearest monitor point 105 to sensor node 205.
  • Sensor node 205 may insert the “monitor point ID” field 2125 of the table entry 2110 with a “Use?” field 2115 equal to one into datagram 3200 's “destination node ID” field 3210 [step 3335 ].
  • Sensor node 205 may determine if datagram 3200 will include “reverse path” flag 3235 [step 3405 ].
  • In one implementation, reverse path information may not be included in any datagram 3200.
  • In another implementation, reverse path information may be included in all datagrams 3200.
  • In yet another implementation, reverse path information may be included in only some percentage of the datagrams 3200 sent from sensor node 205. For example, reverse path information may be included in only one datagram out of every 100 datagrams sent from a source node, or in only one datagram every ten minutes.
  • If reverse path information is to be included, sensor node 205 may set the datagram "reverse path" flag 3235 to one, indicating that a reverse path should be accumulated as the datagram 3200 traverses network 100 [step 3410]. Alternatively, sensor node 205 may set the "reverse path" flag 3235 to zero, indicating that no reverse path should be accumulated as the datagram 3200 traverses network 100. Sensor node 205 may further set "direction" field 3240 to "inbound" by setting the value in the field to zero [step 3415]. Sensor node 205 may also set the "# of Node IDs Appended" field 3245 to zero [step 3420].
  • Sensor node 205 may calculate a checksum of the fabricated datagram 3200 and insert the calculated checksum value in "checksum" field 3215 [step 3425]. Sensor node 205 may then transmit datagram 3200 to the next hop node indicated by the "next hop" field 2135 in the table entry 2110 with "Use?" field 2115 set to one and with the smallest "# of Hops" field 2140 [step 3430].
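  • The fabrication steps of FIGS. 33-34 might be sketched as follows. The forwarding table is assumed to be keyed by monitor point identifier, the default TTL value and the crc helper are illustrative assumptions, and error handling (for example, an empty forwarding table) is omitted.

```python
def fabricate_datagram(node, measurement, include_reverse_path, crc):
    """Sketch of datagram 3200 fabrication at a sensor node 205 (FIGS. 33-34)."""
    # Pick the forwarding entry marked for use, i.e. the fewest-hop route (step 3330).
    dest, route = next((mp_id, e) for mp_id, e in node["forwarding_table"].items()
                       if e["use"] == 1)
    datagram = {
        "source_node_id": node["id"],                     # step 3315
        "destination_node_id": dest,                      # step 3335
        "sensor_message": measurement,                    # steps 3305-3310
        "ttl": node.get("initial_ttl", 16),               # optional, step 3320 (value assumed)
        "geo_location": node.get("geo_location"),         # optional, step 3325
        "reverse_path": 1 if include_reverse_path else 0, # steps 3405-3410
        "direction": 0,                                   # "inbound", step 3415
        "hop_node_ids": [],                               # nothing appended yet, step 3420
    }
    datagram["checksum"] = crc(datagram)                  # step 3425
    return datagram, route["next_hop"]                    # transmit to the next hop, step 3430
```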
  • FIGS. 35-39 are flowcharts that illustrate exemplary processing, consistent with the present invention, for relaying datagrams 3200 received at a sensor node 205 towards either a destination monitor point 105 or other sensor nodes in network 100.
  • As one skilled in the art will appreciate, the method exemplified by FIGS. 35-39 can be implemented as a sequence of instructions and stored in memory 1920 of sensor node 205 for execution by processing unit 1915.
  • Sensor node 205 may begin processing by receiving a sensor datagram 3200 [step 3505 ]. Sensor node 205 may then, optionally, calculate a checksum value of the received datagram 3200 and compare the calculated checksum with the “checksum” field 3215 contained in datagram 3200 [step 3510 ]. Sensor node 205 may determine if the checksums agree [step 3515 ], and if not, sensor node 205 may discard the received datagram 3200 [step 3520 ]. If the checksums do agree, sensor node 205 may, optionally, retrieve the “TTL” field 3220 from datagram 3200 and decrement the value by one [step 3525 ]. Sensor node 205 may then, optionally, determine if the decremented “TTL” value is equal to zero [step 3530 ]. If so, sensor node 205 may discard the datagram 3200 [step 3520 ].
  • If datagram 3200 is not discarded, sensor node 205 may determine if datagram 3200 does not contain a "reverse path" flag 3235 or if "reverse path" flag 3235 is set to zero [step 3535]. If datagram 3200 contains a "reverse path" flag that is set to one, processing may continue at step 3705 below. If datagram 3200 does not contain a "reverse path" flag 3235 or the "reverse path" flag 3235 is set to zero, sensor node 205 may retrieve the "destination node ID" field 3210 from datagram 3200 and find a table entry 2110 with the "monitor point ID" field 2125 equal to the datagram "destination node ID" field 3210 [step 3605]. If sensor node 205 finds that there is no table entry 2110 for the monitor point identified by the "destination node ID" field 3210 [step 3610], then sensor node 205 may discard datagram 3200 [step 3615].
  • If such a table entry 2110 exists, sensor node 205 may, optionally, calculate a new checksum for datagram 3200 [step 3620]. Sensor node 205 may, optionally, insert the calculated checksum in "checksum" field 3215 [step 3625]. Sensor node 205 may further read the "next hop" field 2135 from the table entry 2110 corresponding to the monitor point identified by the datagram "destination node ID" field 3210 [step 3630]. Sensor node 205 may transmit datagram 3200 to the node identified by the "next hop" field 2135 [step 3635].
  • sensor node 205 may determine if the datagram 3200 “direction” indicator field 3240 indicates that the datagram is heading “inbound.” If not, processing may continue with step 3905 below. If so, sensor node 205 may append sensor node 205 's unique identifier to datagram 3200 as a “hop node ID” field 3250 and may increment the datagram “# of Nodes Appended” field 3245 [step 3710 ]. Sensor node 205 may read the “destination node ID” field 3210 from datagram 3200 [step 3715 ].
  • Sensor node 205 may further inspect forwarding table 2105 to locate a table entry 2110 with the “monitor point ID” field 2125 equal to the datagram “destination node ID” field 3210 [step 3720 ]. Sensor node 205 may then determine if the “valid” field 2145 in the located table entry is zero [step 3725 ]. If so, sensor node 205 may discard datagram 3200 [step 3730 ]. If not, sensor node 205 may read the “next hop” field 2135 of the located table entry [step 3805 ] and may transmit datagram 3200 to the node identified by the “next hop” field 2135 [step 3810 ].
  • sensor node 205 may determine if the datagram “destination node ID” field 3210 is equal to sensor node 205 's unique identifier [step 3905 ]. If so, sensor node 205 may read the datagram “sensor message” field 3230 [step 3910 ]. If not, sensor node 205 may determine if the “# of Node IDs Appended” field 3245 is equal to zero [step 3915 ]. If so, sensor node 205 may discard datagram 3200 [step 3920 ]. If not, sensor node 205 may choose the last “hop node ID” 3250 from datagram 3200 as the next hop for the datagram [step 3925 ].
  • Sensor node 205 may remove this “Hop node ID” field 3250 from datagram 3200 and decrement the “# of Node IDs Appended” field 3245 [step 3930 ]. Sensor node 205 may then transmit the datagram to the next hop identified by the removed “hop node ID” field 3250 [step 3935 ].
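  • The relay behavior of FIGS. 35-39 is sketched below, one branch per case described above. The forwarding-table layout, the crc helper (assumed to ignore the "checksum" field itself), and the send and deliver callbacks are illustrative assumptions; each branch simply mirrors the corresponding flowchart description.

```python
def relay_datagram(node, datagram, crc, send, deliver):
    """Simplified datagram relay at a sensor node 205 (FIGS. 35-39)."""
    if datagram.get("checksum") is not None and crc(datagram) != datagram["checksum"]:
        return                                             # steps 3510-3520: discard on checksum error
    if datagram.get("ttl") is not None:
        datagram["ttl"] -= 1                               # step 3525
        if datagram["ttl"] == 0:
            return                                         # step 3530: TTL expired, discard
    table = node["forwarding_table"]                       # keyed by monitor point identifier
    if not datagram.get("reverse_path"):                   # no reverse-path accumulation
        entry = table.get(datagram["destination_node_id"])
        if entry is None:
            return                                         # steps 3610-3615: unknown monitor point
        # A new checksum could be inserted here (steps 3620-3625).
        send(datagram, entry["next_hop"])                  # steps 3630-3635
    elif datagram["direction"] == 0:                       # "inbound": accumulate the reverse path
        datagram["hop_node_ids"].append(node["id"])        # step 3710
        entry = table.get(datagram["destination_node_id"])
        if entry is None or entry["valid"] == 0:
            return                                         # steps 3725-3730: discard
        send(datagram, entry["next_hop"])                  # steps 3805-3810
    else:                                                  # "outbound": follow the appended path
        if datagram["destination_node_id"] == node["id"]:
            deliver(datagram["sensor_message"])            # step 3910
        elif not datagram["hop_node_ids"]:
            return                                         # steps 3915-3920: no path left, discard
        else:
            next_hop = datagram["hop_node_ids"].pop()      # steps 3925-3930: take the last appended ID
            send(datagram, next_hop)                       # step 3935
```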
  • FIGS. 40-43 illustrate exemplary processing, consistent with the present invention, for processing datagrams 3200 received at a monitor point 105 .
  • As one skilled in the art will appreciate, the method exemplified by FIGS. 40-43 can be implemented as a sequence of instructions and stored in memory 2020 of monitor point 105 for execution by processing unit 2015.
  • Monitor point 105 may begin processing by receiving a datagram 3200 from a sensor node 205 in network 100 [step 4005 ]. Monitor point 105 may, optionally, calculate a checksum value for the datagram 3200 and compare the calculated value with the datagram “checksum” field 3215 [step 4010 ]. If the checksums do not agree [step 4015 ], monitor point 105 may discard the datagram 3200 [step 4020 ]. If the checksums do agree, monitor point 105 may inspect the datagram “source node ID” field 3205 and compare the field with the “sensor ID” field 2215 in all entries 2210 of monitor point table 2205 [step 4025 ].
  • If no entry 2210 includes a "sensor ID" field 2215 that matches the datagram "source node ID" field 3205, monitor point 105 may create a new table entry 2210 for the sensor node identified by the datagram "source node ID" field 3205 [step 4035]. Monitor point 105 may further store the datagram "source node ID" field 3205 in the "sensor ID" field 2215 of the newly created table entry [step 4040].
  • Monitor point 105 may store the datagram "geo-location" field 3225 in the table "geo-location" field 2220 [step 4045]. Monitor point 105 may further store the datagram "sensor message" field 3230 in the table "sensor message" field 2225 [step 4105]. Monitor point 105 may determine if datagram 3200 includes a "reverse path" flag 3235 [step 4110]. If not, processing may continue at step 4125. If datagram 3200 does include a "reverse path" flag, then monitor point 105 may store the datagram "# of Node IDs Appended" field 3245 in the table "# of Nodes" field 2230 [step 4115].
  • Monitor point 105 may store the datagram “1 st hop node ID” field 3250 a through “Nth hop node ID” field 3250 N in reverse order in table 2205 by storing in the table “Nth hop” 2235 N through “1 st hop” fields 2235 a [step 4120 ].
  • Monitor point 105 may, optionally, retrieve selected data from monitor point table 2205 and exchange data, via network 115 , with other monitor points in network 100 [step 4125 ]. Monitor point 105 may further determine if a datagram 3200 should be sent to a sensor node 205 in network 100 [step 4130 ]. Monitor point 105 may, for example, periodically send operation control data to a sensor node 205 . If no datagram 3200 is to be sent to a sensor node 205 , processing may return to step 4005 . If a datagram 3200 is to be sent to a sensor node 205 , monitor point 105 may insert its own unique identifier in the datagram 3200 “source node ID” field 3205 [step 4205 ].
  • Monitor point 105 may insert the table 2205 “sensor ID” field 2215 corresponding to the destination sensor node 205 in the datagram “destination node ID” field 3210 [step 4210 ].
  • Monitor point 105 may insert a value in the datagram “TTL” field 3220 [step 4215 ].
  • Monitor point 105 may further insert the monitor point's location in the datagram “geo-location” field 3225 [step 4220 ].
  • Monitor point 105 may further formulate a sensor message and insert the message in the datagram “sensor message” field 3230 [step 4225 ].
  • Monitor point 105 may also set the “direction” indicator field 3240 to “outbound” [step 4230 ] and may insert the table “# of Nodes” field 2230 , corresponding to the table entry 2210 with the appropriate “sensor ID” 2215 , into the datagram “# of Node IDs Appended” field 3245 [step 4235 ].
  • Monitor point 105 may insert the table “1 st Hop” field 2235 a through “Nth hop” field 2235 N into the corresponding datagram “1 st Hop Node ID” 3250 a through “Nth Hop Node ID” 3250 N fields [step 4305 ].
  • Monitor point 105 may calculate a checksum value for the datagram 3200 and insert the calculated value in the datagram “checksum” field 3215 [step 4310 ].
  • Monitor point 105 may transmit datagram 3200 to the first hop identified by the datagram “1 st Hop Node ID” field 3250 a [step 4315 ].
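  • The monitor point side of FIGS. 40-43 might be sketched as follows. The monitor point table is assumed to be a dictionary keyed by sensor identifier, the crc helper and send callback are assumptions, and the default TTL value is illustrative; each step simply mirrors the description above.

```python
def handle_datagram(mp, datagram, crc):
    """Sketch of datagram processing at a monitor point 105 (FIGS. 40-41)."""
    if crc(datagram) != datagram.get("checksum"):          # steps 4010-4020
        return
    entry = mp["monitor_point_table"].setdefault(datagram["source_node_id"], {})   # steps 4025-4040
    entry["geo_location"] = datagram.get("geo_location")   # step 4045
    entry["sensor_message"] = datagram["sensor_message"]   # step 4105
    if datagram.get("reverse_path"):                       # steps 4110-4120
        # Store the accumulated path in reverse order, so that it reads outward
        # from the monitor point toward the sensor node.
        entry["hops"] = list(reversed(datagram["hop_node_ids"]))

def send_control_datagram(mp, sensor_id, control_message, crc, send):
    """Sketch of an outbound datagram from a monitor point 105 (FIGS. 42-43)."""
    entry = mp["monitor_point_table"][sensor_id]
    datagram = {
        "source_node_id": mp["id"],                        # step 4205
        "destination_node_id": sensor_id,                  # step 4210
        "ttl": mp.get("initial_ttl", 16),                  # step 4215 (value assumed)
        "geo_location": mp.get("geo_location"),            # step 4220
        "sensor_message": control_message,                 # step 4225
        "reverse_path": 1,
        "direction": 1,                                    # "outbound", step 4230
        "hop_node_ids": list(entry.get("hops", [])),       # steps 4235-4305
    }
    datagram["checksum"] = crc(datagram)                   # step 4310
    first_hop = datagram["hop_node_ids"][0] if datagram["hop_node_ids"] else sensor_id
    send(datagram, first_hop)                              # step 4315
```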
  • Systems and methods consistent with the present invention, therefore, provide mechanisms that enable sensor node transmitters and receivers to be turned off, and remain in a "sleep" state, for substantial periods, thus, increasing the energy efficiency of the nodes.
  • Systems and methods consistent with the present invention further implement transmission and reception schedules that permit the reception and forwarding of packets containing routing, or other types of data, during short periods when the sensor node transmitters and receivers are powered up and, thus, “awake.”
  • The present invention, thus, increases sensor node operational life by reducing energy consumption while permitting the reception and forwarding of the routing messages needed to self-organize the distributed network.

Abstract

A system for conserving energy in a multi-node network (110) includes nodes (205) configured to organize themselves into tiers (305, 310, 315). The nodes (205) are further configured to produce a transmit/receive schedule at a first tier (310) in the network (110) and control the powering-on and powering-off of transmitters and receivers in nodes (205) in a tier (315) adjacent to the first tier (310) according to the transmit/receive schedule.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application is a continuation of U.S. patent application Ser. No. 12/253,130 filed Oct. 16, 2008, which, in turn, is a continuation of U.S. patent application Ser. No. 12/174,512 filed Jul. 16, 2008, which, in turn, is a continuation of U.S. patent application Ser. No. 10/328,566 filed Dec. 23, 2002, now U.S. Pat. No. 7,421,257, which, in turn, is a continuation-in-part of U.S. patent application Ser. No. 09/998,946 filed Nov. 30, 2001, now U.S. Pat. No. 7,020,501, the entire contents of all of which are incorporated herein by reference.
  • GOVERNMENT CONTRACT
  • The U.S. Government has a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of Contract No. 2000-DT-CX-K001, awarded by the Department of Justice.
  • FIELD OF THE INVENTION
  • The present invention relates generally to ad-hoc, multi-node wireless networks and, more particularly, to systems and methods for implementing energy efficient data forwarding mechanisms in such networks.
  • BACKGROUND OF THE INVENTION
  • Recently, much research has been directed towards the building of networks of distributed wireless sensor nodes. Sensor nodes in such networks conduct measurements at distributed locations and relay the measurements, via other sensor nodes in the network, to one or more measurement data collection points. Sensor networks, generally, are envisioned as encompassing a large number (N) of sensor nodes (e.g., as many as tens of thousands of sensor nodes), with traffic flowing from the sensor nodes into a much smaller number (K) of measurement data collection points using routing protocols. These routing protocols conventionally involve the forwarding of routing packets throughout the sensor nodes of the network to distribute the routing information necessary for sensor nodes to relay measurements to an appropriate measurement data collection point.
  • A key problem with conventional sensor networks is that each sensor node of the network operates for extended periods of time on self-contained power supplies (e.g., batteries or fuel cells). For the routing protocols of the sensor network to operate properly, each sensor node must be prepared to receive and forward routing packets at any time. Each sensor node's transmitter and receiver, thus, conventionally operates in a continuous fashion to enable the sensor node to receive and forward the routing packets essential for relaying measurements from a measuring sensor node to a measurement data collection point in the network. This continuous operation depletes each node's power supply reserves and, therefore, limits the operational life of each of the sensor nodes.
  • Therefore, there exists a need for mechanisms in a wireless sensor network that enable the reduction of sensor node power consumption while, at the same time, permitting the reception and forwarding of the routing packets necessary to implement a distributed wireless network.
  • SUMMARY OF THE INVENTION
  • Systems and methods consistent with the present invention address this need and others by providing mechanisms that enable sensor node transmitters and receivers to be turned off, and remain in a “sleep” state, for substantial periods, thus, increasing the energy efficiency of the nodes. Systems and methods consistent with the present invention further implement transmission and reception schedules that permit the reception and forwarding of packets containing routing, or other types of data, during short periods when the sensor node transmitters and receivers are powered up and, thus, “awake.” The present invention, thus, increases sensor node operational life by reducing energy consumption while permitting the reception and forwarding of the routing messages needed to self-organize the distributed network.
  • In accordance with the purpose of the invention as embodied and broadly described herein, a method of conserving energy in a node in a wireless network includes receiving a first powering-on schedule from another node in the network, and selectively powering-on at least one of a transmitter and receiver based on the received first schedule.
  • In another implementation consistent with the present invention, a method of conveying messages in a sensor network includes organizing a sensor network into a hierarchy of tiers, transmitting one or more transmit/receive scheduling messages throughout the network, and transmitting and receiving data messages between nodes in adjacent tiers based on the one or more transmit/receive scheduling messages.
  • In a further implementation consistent with the present invention, a method of conserving energy in a multi-node network includes organizing the multi-node network into tiers, producing a transmit/receive schedule at a first tier in the network, and controlling the powering-on and powering-off of transmitters and receivers in nodes in a tier adjacent to the first tier according to the transmit/receive schedule.
  • In yet another implementation consistent with the present invention, a method of forwarding messages at a first node in a network includes receiving scheduling messages from a plurality of nodes in the network, selecting one of the plurality of nodes as a parent node, and selectively forwarding data messages to the parent node based on the received scheduling message associated with the selected one of the plurality of nodes.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, explain the invention. In the drawings,
  • FIG. 1 illustrates an exemplary network consistent with the present invention;
  • FIG. 2 illustrates an exemplary sensor network consistent with the present invention;
  • FIG. 3 illustrates the exemplary sensor network of FIG. 2 organized into tiers consistent with the present invention;
  • FIG. 4 illustrates exemplary components of a sensor node consistent with the present invention;
  • FIG. 5 illustrates exemplary components of a monitor point consistent with the present invention;
  • FIG. 6A illustrates an exemplary monitor point database consistent with the present invention;
  • FIG. 6B illustrates exemplary monitor point affiliation/schedule data stored in the database of FIG. 6A consistent with the present invention;
  • FIG. 7A illustrates an exemplary sensor node database consistent with the present invention;
  • FIG. 7B illustrates exemplary sensor node affiliation/schedule data stored in the database of FIG. 7A consistent with the present invention;
  • FIG. 8 illustrates an exemplary schedule message consistent with the present invention;
  • FIG. 9 illustrates exemplary transmit/receive scheduling consistent with the present invention;
  • FIGS. 10-11 are flowcharts that illustrate parent/child affiliation processing consistent with the present invention;
  • FIG. 12 is a flowchart that illustrates exemplary monitor point scheduling processing consistent with the present invention;
  • FIGS. 13-16 are flowcharts that illustrate sensor node schedule message processing consistent with the present invention;
  • FIG. 17 illustrates an exemplary message transmission diagram consistent with the present invention;
  • FIG. 18 illustrates exemplary node receiver timing consistent with the present invention;
  • FIG. 19 illustrates exemplary components of a sensor node consistent with the present invention;
  • FIG. 20 illustrates exemplary components of a monitor point consistent with the present invention;
  • FIG. 21A illustrates a first exemplary database consistent with the present invention;
  • FIG. 21B illustrates an exemplary sensor forwarding table stored in the database of FIG. 21A consistent with the present invention;
  • FIG. 22A illustrates a second exemplary database consistent with the present invention;
  • FIG. 22B illustrates an exemplary monitor point table stored in the database of FIG. 22A consistent with the present invention;
  • FIG. 23 illustrates an exemplary monitor point beacon message consistent with the present invention;
  • FIGS. 24-25 are flowcharts that illustrate exemplary monitor point beacon message processing consistent with the present invention;
  • FIG. 26 illustrates an exemplary sensor node beacon message consistent with the present invention;
  • FIG. 27 is a flowchart that illustrates exemplary sensor node beacon message processing consistent with the present invention;
  • FIGS. 28-31 are flowcharts that illustrate exemplary sensor node forwarding table update processing consistent with the present invention;
  • FIG. 32 illustrates an exemplary sensor datagram consistent with the present invention;
  • FIGS. 33-34 are flowcharts that illustrate exemplary sensor node datagram processing consistent with the present invention;
  • FIGS. 35-39 are flowcharts that illustrate exemplary sensor node datagram relay processing consistent with the present invention; and
  • FIGS. 40-43 are flowcharts that illustrate exemplary monitor point datagram processing consistent with the present invention.
  • DETAILED DESCRIPTION
  • The following detailed description of the invention refers to the accompanying drawings. The same reference numbers in different drawings identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims.
  • Systems and methods consistent with the present invention provide mechanisms for conserving energy in wireless nodes by transmitting scheduling messages throughout the nodes of the network. The scheduling messages include time schedules for selectively powering-on and powering-off node transmitters and receivers. Message datagrams and routing messages may, thus, be conveyed throughout the network during appropriate transmitter/receiver power-on and power-off intervals.
  • Exemplary Network
  • FIG. 1 illustrates an exemplary network 100, consistent with the present invention. Network 100 may include monitor points 105 a-105 n connected to sensor network 110 and network 115 via wired 120, wireless 125, or optical connection links (not shown). Network 100 may further include one or more servers 130 interconnected with network 115.
  • Monitor points 105 a-105 n may include data transceiver units for transmitting messages to, and receiving messages from, one or more sensors of sensor network 110. Such messages may include routing messages containing network routing data, message datagrams containing sensor measurement data, and schedule messages containing sensor node transmit and receive scheduling data. The routing messages may include identification data for one or more monitor points, and the number of hops to reach each respective identified monitor point, as determined by a sensor node/monitor point that is the source of the routing message. The routing messages may be transmitted as wireless broadcast messages in network 100. The routing messages, thus, permit sensor nodes to determine a minimum hop path to a monitor point in network 100. Through the use of routing messages, monitor points 105 a-105 n may operate as “sinks” for sensor measurements made at nearby sensor nodes. Message datagrams may include sensor measurement data that may be transmitted to a monitor point 105 a-105 n for data collection. Message datagrams may be sent from a monitor point to a sensor node, from a sensor node to a monitor point, or from a sensor node to a sensor node.
  • In one embodiment, monitor points 105 a-105 n may include data transceiver units for transmitting messages to and from one or more sensors of sensor network 110. Such messages may include beacon messages and message datagrams. Beacon messages may include identification data for one or more monitor points, and the number of hops to reach each respective identified monitor point, as determined by a sensor node/monitor point that is the source of the beacon message. Beacon messages may be transmitted as wireless broadcast messages in network 100. Beacon messages, thus, permit sensor nodes to determine a minimum hop path to a monitor point in network 100. Through the use of beacon messages, monitor points 105 a-105 n may operate as “sinks” for sensor measurements made at nearby sensor nodes. Message datagrams may be sent from a monitor point to a sensor node, from a sensor node to a monitor point, or from a sensor node to a sensor node. Message datagrams may include path information for transmitting message datagrams, hop by hop, from one node in network 100 to another node in network 100. Message datagrams may further include sensor measurement data that may be transmitted to a monitor point 105 a-105 n for data collection.
  • Sensor network 110 may include one or more distributed sensor nodes (not shown) that may organize themselves into an ad-hoc, multi-hop wireless network. Each of the distributed sensor nodes of sensor network 110 may include one or more of any type of conventional sensing device, such as, for example, acoustic sensors, motion-detection sensors, radar sensors, sensors that detect specific chemicals or families of chemicals, sensors that detect nuclear radiation or biological agents, magnetic sensors, electronic emissions signal sensors, thermal sensors, and visual sensors that detect or record still or moving images in the visible or other spectrum. Sensor nodes of sensor network 110 may perform one or more measurements over a sampling period and transmit the measured values via packets, datagrams, cells or the like to monitor points 105 a-105 n.
  • Network 115 may include one or more networks of any type, including a Public Land Mobile Network (PLMN), Public Switched Telephone Network (PSTN), local area network (LAN), metropolitan area network (MAN), wide area network (WAN), Internet, or Intranet. The one or more PLMNs may further include packet-switched sub-networks, such as, for example, General Packet Radio Service (GPRS), Cellular Digital Packet Data (CDPD), and Mobile IP sub-networks.
  • Server 130 may include a conventional computer, such as a desktop, laptop or the like. Server 130 may collect data, via network 115, from each monitor point 105 of network 100 and archive the data for future retrieval.
  • Exemplary Sensor Network
  • FIG. 2 illustrates an exemplary sensor network 110 consistent with the present invention. Sensor network 110 may include one or more sensor nodes 205 a-205 s that may be distributed across a geographic area. Sensor nodes 205 a-205 s may communicate with one another, and with one or more monitor points 105 a-105 n, via wireless or wire-line links (not shown), using, for example, packet-switching mechanisms. Using techniques such as those described in patent application Ser. No. 09/999,353, entitled “Systems and Methods for Scalable Routing in Ad-Hoc Wireless Sensor Networks” and filed Nov. 15, 2001 (the disclosure of which is incorporated by reference herein), sensor nodes 205 a-205 s may organize themselves into an ad-hoc, multi-hop wireless network through the communication of routing messages and message datagrams.
  • In one embodiment, sensor nodes 205 a-205 s may organize themselves into an ad-hoc, multi-hop wireless network through the communication of beacon messages and message datagrams. Beacon messages may be transmitted as wireless broadcast messages and may include identification data for one or more monitor points, and the number of hops to reach each respective identified monitor point, as determined by a sensor node/monitor point that is the source of the beacon message. Message datagrams may include path information for transmitting the message datagrams, hop by hop, from one node in network 100 to another node in network 100. Message datagrams may further include sensor measurement data that may be transmitted to a monitor point 105 a-105 n for data collection.
  • FIG. 3 illustrates sensor network 110 self-organized into tiers using conventional routing protocols, or the routing protocol described in the above-described patent application Ser. No. 09/999,353. When organized into tiers, messages may be forwarded, hop by hop through the network, from monitor points to sensor nodes, or from individual sensor nodes to monitor points that act as “sinks” for nearby sensor nodes. As shown in the exemplary network configuration illustrated in FIG. 3, monitor point MP1 105 a may act as a “sink” for message datagrams from sensor nodes 205 a-205 e, monitor point MP2 105 b may act as a “sink” for message datagrams from sensor nodes 205 f-205 l, and monitor point MP3 105 n may act as a “sink” for message datagrams from sensor nodes 205 m-205 s.
  • As further shown in FIG. 3, monitor point MP1 105 a may reside in MP1 tier 0 305, sensor nodes 205 a-205 c may reside in MP1 tier 1 310, and sensor nodes 205 d-205 e may reside in MP1 tier 2 315. Monitor point MP2 105 b may reside in MP2 tier 0 320, sensor nodes 205 f-205 h may reside in MP2 tier 1 325, sensor nodes 205 i-205 k may reside in MP2 tier 2 330 and sensor node 205 l may reside in MP2 tier 3 335. Monitor point MP3 105 n may reside in MP3 tier 0 340, sensor nodes 205 m-205 o may reside in MP3 tier 1 345, sensor nodes 205 p-205 q may reside in MP3 tier 2 350 and sensor nodes 205 r-205 s may reside in MP3 tier 3 355. Each tier shown in FIG. 3 represents an additional hop that data must traverse when traveling from a sensor node to a monitor point, or from a monitor point to a sensor node. At least one node in any tier may act as a "parent" for nodes in the next higher tier (e.g., MP1 Tier 2 315). Thus, for example, sensor node 205 a acts as a "parent" node for sensor nodes 205 d-205 e. Sensor nodes 205 d-205 e may relay all messages through sensor node 205 a to reach monitor point MP1 105 a.
  • Exemplary Sensor Node
  • FIG. 4 illustrates exemplary components of a sensor node 205 consistent with the present invention. Sensor node 205 may include a transmitter/receiver 405, an antenna 410, a processing unit 415, a memory 420, an optional output device(s) 425, an optional input device(s) 430, one or more sensor units 435 a-435 n, a clock 440, and a bus 445.
  • Transmitter/receiver 405 may connect sensor node 205 to a monitor point 105 or another sensor node. For example, transmitter/receiver 405 may include transmitter and receiver circuitry well known to one skilled in the art for transmitting and/or receiving data bursts via antenna 410.
  • Processing unit 415 may perform all data processing functions for inputting, outputting and processing of data including data buffering and sensor node control functions. Memory 420 may include random access memory (RAM) and/or read only memory (ROM) that provides permanent, semi-permanent, or temporary working storage of data and instructions for use by processing unit 415 in performing processing functions. Memory 420 may also include large-capacity storage devices, such as magnetic and/or optical recording devices. Output device(s) 425 may include conventional mechanisms for outputting data in video, audio and/or hard copy format. For example, output device(s) 425 may include a conventional display for displaying sensor measurement data. Input device(s) 430 may permit entry of data into sensor node 205. Input device(s) 430 may include, for example, a touch pad or keyboard.
  • Sensor units 435 a-435 n may include one or more of any type of conventional sensing device, such as, for example, acoustic sensors, motion-detection sensors, radar sensors, sensors that detect specific chemicals or families of chemicals, sensors that detect nuclear radiation or sensors that detect biological agents such as anthrax. Each sensor unit 435 a-435 n may perform one or more measurements over a sampling period and transmit the measured values via packets, cells, datagrams, or the like to monitor points 105 a-105 n. Clock 440 may include conventional circuitry for maintaining a time base to enable the maintenance of a local time at sensor node 205. Alternatively, sensor node 205 may derive a local time from an external clock signal, such as, for example, a GPS signal, or from an internal clock synchronized to an external time base.
  • Bus 445 may interconnect the various components of sensor node 205 and permit them to communicate with one another.
  • FIG. 19 illustrates exemplary components of a sensor node 205 consistent with the present invention. Sensor node 205 may include a communication interface 1905, an antenna 1910, a processing unit 1915, a memory 1920, an optional output device(s) 1925, an optional input device(s) 1930, an optional geo-location unit 1935, one or more sensor units 1940 a-1940 n, and a bus 1945.
  • Communication interface 1905 may connect sensor node 205 to a monitor point 105 or another sensor node. For example, communication interface 1905 may include transceiver circuitry well known to one skilled in the art for transmitting and/or receiving data bursts via antenna 1910.
  • Processing unit 1915 may perform all data processing functions for inputting, outputting and processing of data including data buffering and sensor node control functions. Memory 1920 provides permanent, semi-permanent, or temporary working storage of data and instructions for use by processing unit 1915 in performing processing functions. Memory 1920 may include large-capacity storage devices, such as magnetic and/or optical recording devices. Output device(s) 1925 may include conventional mechanisms for outputting data in video, audio and/or hard copy format. For example, output device(s) 1925 may include a conventional display for displaying sensor measurement data. Input device(s) 1930 may permit entry of data into sensor node 205. Input device(s) 1930 may include, for example, a touch pad or keyboard.
  • Geo-location unit 1935 may include a conventional device for determining a geo-location of sensor node 205. For example, geo-location unit 1935 may include a Global Positioning System (GPS) receiver that can receive GPS signals and can determine corresponding geo-locations in accordance with conventional techniques.
  • Sensor units 1940 a-1940 n may include one or more of any type of conventional sensing device, such as, for example, acoustic sensors, motion-detection sensors, radar sensors, sensors that detect specific chemicals or families of chemicals, sensors that detect nuclear radiation or sensors that detect biological agents such as anthrax. Each sensor unit 1940 a-1940 n may perform one or more measurements over a sampling period and transmit the measured values via packets, cells, datagrams, or the like to monitor points 105 a-105 n.
  • Bus 1945 may interconnect the various components of sensor node 205 and permit them to communicate with one another.
  • Exemplary Monitor Point
  • FIG. 5 illustrates exemplary components of a monitor point 105 consistent with the present invention. Monitor point 105 may include a transmitter/receiver 505, an antenna 510, a processing unit 515, a memory 520, an input device(s) 525, an output device(s) 530, network interface(s) 535, a clock 540, and a bus 545.
  • Transmitter/receiver 505 may connect monitor point 105 to another device, such as another monitor point or one or more sensor nodes. For example, transmitter/receiver 505 may include transmitter and receiver circuitry well known to one skilled in the art for transmitting and/or receiving data bursts via antenna 510.
  • Processing unit 515 may perform all data processing functions for inputting, outputting, and processing of data. Memory 520 may include Random Access Memory (RAM) that provides temporary working storage of data and instructions for use by processing unit 515 in performing processing functions. Memory 520 may additionally include Read Only Memory (ROM) that provides permanent or semi-permanent storage of data and instructions for use by processing unit 515. Memory 520 can also include large-capacity storage devices, such as a magnetic and/or optical device.
  • Input device(s) 525 permits entry of data into monitor point 105 and may include a user interface (not shown). Output device(s) 530 permits the output of data in video, audio, or hard copy format. Network interface(s) 535 interconnects monitor point 105 with network 115. Clock 540 may include conventional circuitry for maintaining a time base to enable the maintenance of a local time at monitor point 105. Alternatively, monitor point 105 may derive a local time from an external clock signal, such as, for example, a GPS signal, or from an internal clock synchronized to an external time base.
  • Bus 545 interconnects the various components of monitor point 105 to permit the components to communicate with one another.
  • FIG. 20 illustrates exemplary components of a monitor point 105 consistent with the present invention. Monitor point 105 may include a communication interface 2005, an antenna 2010, a processing unit 2015, a memory 2020, an input device(s) 2025, an output device 2030, network interface(s) 2035 and a bus 2040.
  • Communication interface 2005 may connect monitor point 105 to another device, such as another monitor point or one or more sensor nodes. For example, communication interface 2005 may include transceiver circuitry well known to one skilled in the art for transmitting and/or receiving data bursts via antenna 2010.
  • Processing unit 2015 may perform all data processing functions for inputting, outputting, and processing of data. Memory 2020 may include Random Access Memory (RAM) that provides temporary working storage of data and instructions for use by processing unit 2015 in performing processing functions. Memory 2020 may additionally include Read Only Memory (ROM) that provides permanent or semi-permanent storage of data and instructions for use by processing unit 2015. Memory 2020 can also include large-capacity storage devices, such as a magnetic and/or optical device.
  • Input device(s) 2025 permits entry of data into monitor point 105 and may include a user interface (not shown). Output device(s) 2030 permits the output of data in video, audio, or hard copy format. Network interface(s) 2035 interconnects monitor point 105 with network 115.
  • Bus 2040 interconnects the various components of monitor point 105 to permit the components to communicate with one another.
  • Exemplary Monitor Point Database
  • FIG. 6A illustrates an exemplary database 600 that may be stored in memory 520 of a monitor point 105. Database 600 may include monitor point affiliation/schedule data 605 that includes identifiers of sensor nodes affiliated with monitor point 105, and scheduling data indicating times at which monitor point 105 may transmit to, or receive bursts of data from, affiliated sensor nodes. FIG. 6B illustrates exemplary data that may be contained in monitor point affiliation/schedule data 605. Monitor point affiliation/schedule data 605 may include “affiliated children IDs” data 610 and “Tx/Rx schedule” data 615. “Tx/Rx schedule” data 615 may further include “parent Tx” 620 data, “child-to-parent Tx” data 625, and “next tier activity” data 630.
  • “Affiliated children IDs” data 610 may include unique identifiers of sensor nodes 205 that are affiliated with monitor point 105 and, thus, from which monitor point 105 may receive messages. “Parent Tx” data 620 may include a time at which monitor point 105 may transmit messages to sensor nodes identified by the “affiliated children IDs” data 610. “Child-to-Parent Tx” data 625 may include times at which sensor nodes identified by “affiliated children IDs” 610 may transmit messages to monitor point 105. “Next Tier Activity” data 630 may include times at which sensor nodes identified by the “affiliated children IDs” data 610 may transmit messages to, and receive messages from, their affiliated children.
  • FIG. 22A illustrates an exemplary database 2200 that may be stored in memory 2020 of a monitor point 105. Database 2200 may include a monitor point table 2205 that further includes data regarding sensor nodes 205 in network 110 to which monitor point 105 can transmit message datagrams. FIG. 22B illustrates an exemplary monitor point table 2205 that may include one or more table entries 2210. Each entry 2210 in monitor point table 2205 may include a number of fields, including a "sensor ID" field 2215, a "geo-location" field 2220, a "sensor message" field 2225, a "# of Nodes" field 2230, a "1st Hop" field 2235 a through a "Nth Hop" field 2235N, and a "Seq #" field 2240.
  • "Sensor ID" field 2215 may indicate a unique identifier for a sensor node 205 in sensor network 110. "Geo-location" field 2220 may indicate a geographic location associated with the sensor node 205 identified by a corresponding "sensor ID" field 2215. "Sensor message" field 2225 may include a message, such as, for example, data from measurements performed at the sensor node 205 identified by a corresponding "sensor ID" field 2215. "# of Nodes" field 2230 may indicate the number of hops across sensor network 110 to reach the sensor node 205 identified by a corresponding "sensor ID" field 2215 from monitor point 105. "1st Hop" field 2235 a through "Nth Hop" field 2235N may indicate the unique identifier of each sensor node 205 in network 110 that a message datagram must hop through to reach the sensor node 205 identified by a corresponding "sensor ID" field 2215 from monitor point 105. "Seq #" field 2240 may include startup number, counter number, and time stamp sub-fields (not shown) corresponding to sequencing data that may be extracted from received beacon messages (see FIG. 26 below).
  • Exemplary Sensor Node Database
  • FIG. 7A illustrates an exemplary database 700 that may be stored in memory 420 of a sensor node 205. Database 700 may include sensor affiliation/schedule data 705 that may further include data indicating which sensor nodes are affiliated with sensor node 205 and indicating schedules for sensor node 205 to transmit and receive messages.
  • FIG. 7B illustrates exemplary sensor affiliation/schedule data 705. Sensor affiliation/schedule data 705 may include "designated parent ID" data 710, "parent's schedule" data 715, "derived schedule" data 720, and "affiliated children IDs" data 725. "Designated parent ID" data 710 may include a unique identifier that identifies the "parent" node, in a lower tier of sensor network 110, to which sensor node 205 forwards messages. "Parent's schedule" data 715 may further include "parent Tx" data 620, "child-to-parent Tx" data 625 and "next tier activity" data 630. "Derived schedule" data 720 may further include "this node Tx" data 730, "children-to-this node Tx" data 735, and "this node's next tier activity" data 740. "This node Tx" data 730 may indicate a time at which sensor node 205 forwards messages to sensor nodes identified by "affiliated children IDs" data 725. "Children-to-this node Tx" data 735 may indicate times at which sensor nodes identified by "affiliated children IDs" data 725 may forward messages to sensor node 205. "This node's next tier activity" 740 may indicate one or more time periods allocated to sensor nodes in the next higher tier for transmitting and receiving messages.
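  • The affiliation and schedule records of FIGS. 6B and 7B can be summarized with the dataclass sketch below. Representing each scheduled period as a (start time, duration) pair, and the class and attribute names themselves, are editorial assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Period = Tuple[float, float]   # assumed (start time, duration) representation

@dataclass
class TxRxSchedule:
    """Tx/Rx schedule data 615 / parent's schedule data 715."""
    parent_tx: Period                             # "parent Tx" data 620
    child_to_parent_tx: List[Period]              # "child-to-parent Tx" data 625
    next_tier_activity: List[Period]              # "next tier activity" data 630

@dataclass
class DerivedSchedule:
    """Derived schedule data 720."""
    this_node_tx: Period                          # "this node Tx" data 730
    children_to_this_node_tx: List[Period]        # "children-to-this node Tx" data 735
    this_nodes_next_tier_activity: List[Period]   # "this node's next tier activity" data 740

@dataclass
class MonitorPointAffiliation:
    """Monitor point affiliation/schedule data 605."""
    affiliated_children_ids: List[int]            # data 610
    schedule: TxRxSchedule                        # "Tx/Rx schedule" data 615

@dataclass
class SensorAffiliation:
    """Sensor affiliation/schedule data 705."""
    designated_parent_id: Optional[int]                       # data 710
    parents_schedule: Optional[TxRxSchedule] = None           # data 715
    derived_schedule: Optional[DerivedSchedule] = None        # data 720
    affiliated_children_ids: List[int] = field(default_factory=list)   # data 725
```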
  • FIG. 21A illustrates an exemplary database 2100 that may be stored in memory 1920 of a sensor node 205. Database 2100 may include a sensor forwarding table 2105 that further includes data for forwarding beacon messages and/or message datagrams received at a sensor node 205.
  • FIG. 21B illustrates an exemplary sensor forwarding table 2105 that may include one or more table entries 2110. Each entry 2110 in sensor forwarding table 2105 may include a number of fields, including a “Use?” field 2115, a “time stamp” field 2120, a “monitor point ID” field 2125, a “Seq #” field 2130, a “next hop” field 2135, a “# of hops” field 2140 and a “valid” field 2145.
  • “Use?” field 2115 may identify the “monitor point ID” field 2125 that sensor node 205 will use to identify the monitor point 105 to which it will send all of its message datagrams. The identified monitor point may be the monitor point that can be reached from sensor node 205 in the fewest hops. “Time stamp” field 2120 may indicate a time associated with each entry 2110 in sensor forwarding table 2105. “Monitor point ID” field 2125 may include a unique identifier that identifies a monitor point 105 in network 100 associated with each entry 2110 in forwarding table 2105. “Seq #” field 2130 may include the sequence number of the most recent beacon message received from the monitor point 105 identified in the corresponding “monitor point ID” field 2125. “Seq #” field 2130 may further include “startup number,” “counter number,” and “time stamp” sub-fields (not shown). The “startup number” sub-field may include a number indicating how many times the monitor point 105 identified in the corresponding “monitor point ID” field 2125 has been powered up. The “counter number” sub-field may include a number indicating the number of times the monitor point 105 identified in the corresponding “monitor point ID” field 2125 has sent a beacon message. The “time stamp” sub-field may include a time at which the monitor point 105 identified in “monitor point ID” field 2125 sent a beacon message from which the data in the corresponding entry 2110 was derived. Monitor point 105 may derive the time from an external clock signal, such as, for example, a GPS signal, or from an internal clock synchronized to an external time base.
  • “Next hop” field 2135 may include an identifier for a neighboring sensor node from which the sensor node 205 received information concerning the monitor point 105 identified by “monitor point ID” field 2125. The “# of hops” field 2140 may indicate the number of hops from sensor node 205 to reach the monitor point 105 identified by the corresponding “monitor point ID” field 2125. “Valid” field 2145 may indicate whether data stored in the fields of the corresponding table entry 2110 should be propagated in beacon messages sent from sensor node 205.
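  • By way of illustration only, the forwarding-table layout described above might be represented in software roughly as in the following minimal Python sketch. The class name, types, and default values are assumptions; the comments map each attribute to the corresponding field of the sensor forwarding table.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ForwardingEntry:
    """Illustrative stand-in for one entry 2110 of sensor forwarding table 2105."""
    monitor_point_id: int            # "monitor point ID" field 2125
    seq_startup: int = 0             # "startup number" sub-field of "Seq #" field 2130
    seq_counter: int = 0             # "counter number" sub-field of "Seq #" field 2130
    seq_time_stamp: float = 0.0      # "time stamp" sub-field of "Seq #" field 2130
    next_hop: Optional[int] = None   # "next hop" field 2135
    num_hops: int = 0                # "# of hops" field 2140
    use: bool = False                # "Use?" field 2115
    valid: bool = False              # "valid" field 2145
    time_stamp: float = 0.0          # "time stamp" field 2120 (local clock)
```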
  • Exemplary Schedule Message
  • FIG. 8 illustrates an exemplary schedule message 800 that may be transmitted from a monitor point 105 or sensor node 205 for scheduling message transmit and receive times within sensor network 110. Schedule message 800 may include a number of data fields, including “transmitting node ID” data 805, “parent Tx” data 620, and “next-tier node transmit schedule” data 810. “Next-tier node transmit schedule” 810 may further include “child-to-parent Tx” data 625 and “next tier activity” data 630. “Transmitting node ID” data 805 may include a unique identifier of the monitor point 105 or sensor node 205 originating the schedule message 800.
  • Exemplary Transmit/Receive Scheduling
  • FIG. 9 illustrates exemplary transmit/receive scheduling that may be employed at each sensor node 205 of network 110 according to schedule messages 800 received from “parent” nodes in a lower tier. The first time period shown on the scheduling timeline, Parent Tx time 620, may include the time period allocated by a “parent” node to transmitting messages from the “parent” node to its affiliated children. The time periods “child-to-parent Tx” 625 may include time periods allocated to each affiliated child of a parent node for transmitting messages to the parent node. During the “child-to-parent Tx” 625 time periods, the receiver of the parent node may be turned on to receive messages from the affiliated children.
  • The “next tier activity” 630 may include time periods allocated to each child of a parent node for transmitting messages to, and receiving messages from, each child's own children nodes. From the time periods allocated to the children of a parent node, each child may construct its own derived schedule. This derived schedule may include a time period, “this node Tx” 730 during which the child node may transmit to its own affiliated children. The derived schedule may further include time periods, “children-to-this node Tx” 735 during which these affiliated children may transmit messages to the parent's child node. The derived schedule may additionally include time periods, designated “this node's next tier activity” 740, that may be allocated to this node's children so that they may, in turn, construct their own derived schedule for their own affiliated children.
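  • A toy sketch of how a child node might carve its “next tier activity” allotment into a derived schedule follows. The equal subdivision used here and the tuple-based slot representation are assumptions made purely for illustration; the description above does not prescribe how the allotment is divided.

```python
def derive_schedule(slot_start, slot_end, child_ids):
    """Split this node's "next tier activity" allotment into a derived schedule:
    a "this node Tx" 730 period, one "children-to-this node Tx" 735 sub-slot per
    affiliated child, and a remaining "this node's next tier activity" 740 period.
    The proportions are illustrative assumptions only."""
    span = slot_end - slot_start
    this_node_tx = (slot_start, slot_start + 0.25 * span)
    n = max(len(child_ids), 1)
    child_span = 0.25 * span / n
    children_tx = {
        cid: (this_node_tx[1] + i * child_span, this_node_tx[1] + (i + 1) * child_span)
        for i, cid in enumerate(child_ids)
    }
    next_tier_activity = (slot_start + 0.5 * span, slot_end)
    return {
        "this_node_tx": this_node_tx,
        "children_to_this_node_tx": children_tx,
        "next_tier_activity": next_tier_activity,
    }
```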
  • Exemplary Parent/Child Affiliation Processing
  • FIGS. 10-11 are flowcharts that illustrate exemplary processing, consistent with the present invention, for affiliating “child” sensor nodes 205 with “parent” nodes in a lower tier. Such “parent” nodes may include other sensor nodes 205 in sensor network 110 or monitor points 105. As one skilled in the art will appreciate, the method exemplified by FIGS. 10 and 11 can be implemented as a sequence of instructions and stored in memory 420 of sensor node 205 for execution by processing unit 415.
  • An unaffiliated sensor node 205 may begin parent/child affiliation processing by turning on its receiver 405 and continuously listening for schedule message(s) transmitted from a lower tier of sensor network 110 [step 1005] (FIG. 10). Sensor node 205 may be unaffiliated with any “parent” node if it has recently been powered on. Sensor node 205 may also be unaffiliated if it has stopped receiving schedule messages from its “parent” node for a specified time period. If one or more schedule messages are received [step 1010], unaffiliated sensor node 205 may select a neighboring node to designate as a parent [step 1015]. For example, sensor node 205 may select a neighboring node whose transmit signal has the greatest strength or the least bit error rate (BER). Sensor node 205 may insert the “transmitting node ID” data 805 from the corresponding schedule message 800 of the selected neighboring node into the “designated parent ID” data 710 of database 700 [step 1020]. Sensor node 205 may then update database 700's “parent's schedule” data 715 with “parent Tx” data 620, “child-to-parent Tx” data 625, and “next tier activity” data 630 from the corresponding schedule message 800 of the selected neighboring node [step 1025].
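  • The parent-selection step just described might look roughly like the sketch below. The message dictionary keys and the combined signal-strength/bit-error-rate ranking are assumptions; the description only requires that the strongest signal or the lowest BER be preferred.

```python
def select_and_record_parent(heard_schedule_msgs, database):
    """Steps 1015-1025, sketched: pick the best-heard neighbor as the designated
    parent and record its schedule.  Each element of 'heard_schedule_msgs' is
    assumed to be a dict with keys "transmitting_node_id", "rssi", "ber",
    "parent_tx", "child_to_parent_tx", and "next_tier_activity"."""
    if not heard_schedule_msgs:
        return None
    # Prefer the greatest signal strength; break ties with the lowest BER.
    best = max(heard_schedule_msgs, key=lambda m: (m["rssi"], -m["ber"]))
    database["designated_parent_id"] = best["transmitting_node_id"]     # data 710
    database["parents_schedule"] = {                                    # data 715
        "parent_tx": best["parent_tx"],                                 # data 620
        "child_to_parent_tx": best["child_to_parent_tx"],               # data 625
        "next_tier_activity": best["next_tier_activity"],               # data 630
    }
    return best["transmitting_node_id"]
```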
  • Sensor node 205 may determine if any affiliation messages have been received from sensor nodes residing in higher tiers [step 1105] (FIG. 11). If so, sensor node 205 may store the node identifiers contained in the affiliation messages in database 700's “affiliated children IDs” data 725 [step 1110]. Sensor node 205 may also transmit an affiliation message to the node identified by “designated parent ID” data 710 in database 700 [step 1115]. Sensor node 205 may further determine a derived schedule from the “next tier activity” data 630 in database 700 [step 1120] and store it in the “derived schedule” data 720.
  • Exemplary Monitor Point Message Processing
  • FIG. 12 is a flowchart that illustrates exemplary processing, consistent with the present invention, for receiving affiliation messages and transmitting schedule messages at a monitor point 105. As one skilled in the art will appreciate, the method exemplified by FIG. 12 can be implemented as a sequence of instructions and stored in memory 520 of monitor point 105 for execution by processing unit 515.
  • Monitor point message processing may begin with a monitor point 105 receiving one or more affiliation messages from neighboring sensor nodes [step 1205] (FIG. 12). Monitor point 105 may insert the node identifiers from the received affiliation message(s) into database 600's “affiliated children IDs” data 610 [step 1210]. Monitor point 105 may construct the “Tx/Rx schedule” 615 based on the number of affiliated children indicated in “affiliated children IDs” data 610 [step 1215]. Monitor point 105 may then transmit a schedule message 800 to sensor nodes identified by “affiliated children IDs” data 610 containing monitor point 105's “transmitting node ID” data 805, “parent Tx” data 620, and “next-tier node transmit schedule” data 810 [step 1220]. Schedule message 800 may be transmitted periodically using conventional multiple access mechanisms, such as, for example, Carrier Sense Multiple Access (CSMA). Subsequent to transmission of schedule message 800, monitor point 105 may determine if acknowledgements (ACKs) have been received from all affiliated children [step 1225]. If not, monitor point 105 may re-transmit the schedule message 800 at regular intervals until ACKs are received from all affiliated children [step 1230]. In this manner, monitor point 105 coordinates and schedules the power on/off intervals of the sensor nodes with which it is associated (i.e., the nodes to which it transmits data and from which it receives data).
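  • Steps 1220-1230 amount to a simple retransmit-until-acknowledged loop, sketched below. The radio object and its send/collect_acks methods are assumptions; the CSMA access mechanism and retry interval are left to the underlying radio.

```python
def broadcast_schedule_until_acked(radio, schedule_msg, children, retry_interval=1.0, max_tries=10):
    """Send schedule message 800 and re-send it at regular intervals until every
    affiliated child has acknowledged it (or an assumed retry budget runs out)."""
    pending = set(children)
    tries = 0
    while pending and tries < max_tries:
        radio.send(schedule_msg)                                   # step 1220
        pending -= radio.collect_acks(timeout=retry_interval)      # step 1225
        tries += 1                                                 # step 1230
    return not pending  # True once ACKs arrived from all affiliated children
```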
  • Exemplary Message Reception/Transmission Processing
  • FIGS. 13-16 are flowcharts that illustrate exemplary processing, consistent with the present invention, for receiving and/or transmitting messages at a sensor node 205. As one skilled in the art will appreciate, the method exemplified by FIGS. 13-16 can be implemented as a sequence of instructions and stored in memory 420 of sensor node 205 for execution by processing unit 415. The exemplary reception and transmission of messages at a sensor node as illustrated in FIGS. 13-16 is further demonstrated with respect to the exemplary message transmission diagram illustrated in FIG. 17.
  • Sensor node 205 (“This node” 1710 of FIG. 17) may begin processing by determining if it is the next parent transmit time as indicated by clock 440 and the “parent Tx” data 620 of database 700 [step 1305]. If so, sensor node 205 may turn on receiver 405 [step 1310] (FIG. 13) and listen for messages transmitted from a parent [step 1315] (see also “Parent Node” 1705 of FIG. 17). If no messages are received, sensor node 205 determines if a receive timer has expired [step 1405] (FIG. 14). The receive timer may indicate a maximum time period that sensor node 205 (see “This Node” 1710 of FIG. 17) may listen for messages before turning off receiver 405. If the receive timer has not expired, processing may return to step 1315. If the receive timer has expired, sensor node 205 may turn off receiver 405 [step 1410]. If messages have been received (see “Parent Tx” 620 of FIG. 17), sensor node 205 may, optionally, transmit an ACK to the parent node that transmitted the messages [step 1320]. Sensor node 205 may then turn off receiver 405 [step 1325].
  • Inspecting the received messages, sensor node 205 may determine if sensor node 205 is the destination of each of the received messages [step 1330]. If so, sensor node 205 may process the message [step 1335]. If not, sensor node 205 may determine a next hop in sensor network 110 for the message using conventional routing tables, and place the message in a forwarding queue [step 1340]. At step 1415, sensor node 205 may determine if it is time to transmit messages to the parent node as indicated by “child-to-parent Tx” data 625 of database 700 (see “child-to-parent Tx” 625 of FIG. 17). If not, sensor node 205 may sleep until clock 440 indicates that it is time to transmit messages to the parent node [step 1420]. If clock 440 and “child-to-parent Tx” data 625 indicate that it is time to transmit messages to the parent node, sensor node 205 may turn on transmitter 405 and transmit all messages intended to go to the node indicated by the “designated parent ID” data 710 of database 700 [step 1425]. After all messages are transmitted to the parent node, sensor node 205 may turn off transmitter 405 [step 1430].
  • Sensor node 205 may create a new derived schedule for its children identified by “affiliated children IDs” data 725, based on the “parent's schedule” 715, and may then store the new derived schedule in the “derived schedule” data 720 of database 700 [step 1435]. Sensor node 205 may inspect the “this node Tx” data 730 of database 700 to determine if it is time to transmit to the sensor nodes identified by the “affiliated children IDs” data 725 [step 1505] (FIG. 15). If so, sensor node 205 may turn on transmitter 405 and transmit messages, including schedule messages, to its children [step 1510] (see “This Node Tx” 730, FIG. 17). For each transmitted message, sensor node 205 may, optionally, determine if an ACK is received [step 1515]. If not, sensor node 205 may further, optionally, re-transmit the corresponding message at a regular interval until an ACK is received [step 1520]. When all ACKs are received, sensor node 205 may turn off transmitter 405 [step 1525]. Sensor node 205 may then determine if it is time for its children to transmit to sensor node 205 as indicated by clock 440 and “children-to-this node Tx” data 735 of database 700 [step 1605] (FIG. 16). If so, sensor node 205 may turn on receiver 405 and receive one or more messages from the children identified by the “affiliated children IDs” data 725 of database 700 [step 1610] (see “Children-to-this Node Tx” 735, FIG. 17). Sensor node 205 may then turn off receiver 405 [step 1615] and processing may return to step 1305 (FIG. 13). In this manner, sensor nodes may power on and off their transmitters and receivers at appropriate times to conserve energy, while still performing their intended functions in network 100.
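  • Condensed into a Python-style sketch, the receive/transmit cycle of FIGS. 13-16 is essentially the duty-cycle loop below. Every method on the assumed 'node' object (sleep_until, receive_window, transmit_all, and so on) is illustrative shorthand for the steps cited in the comments, not an implementation of them.

```python
def sensor_duty_cycle(node):
    """One pass per schedule period: listen to the parent, forward or consume
    what was heard, transmit to the parent in this node's slot, then serve
    this node's own affiliated children."""
    while True:
        node.sleep_until(node.schedule["parent_tx"])                      # step 1305
        msgs = node.receive_window(node.rx_timeout)                       # steps 1310-1325, 1405-1410
        for m in msgs:
            if m.destination == node.node_id:
                node.process(m)                                           # step 1335
            else:
                node.forwarding_queue.append(m)                           # step 1340
        node.sleep_until(node.schedule["child_to_parent_tx"])             # steps 1415-1420
        node.transmit_all(node.forwarding_queue, to=node.designated_parent_id)  # steps 1425-1430
        node.update_derived_schedule()                                    # step 1435
        node.sleep_until(node.derived["this_node_tx"])                    # step 1505
        node.transmit_to_children()                                       # steps 1510-1525
        node.sleep_until(node.derived["children_to_this_node_tx"])        # step 1605
        node.receive_from_children()                                      # steps 1610-1615
```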
  • Exemplary Receiver Timing
  • FIG. 18 illustrates exemplary receiver timing when monitor points 105 or sensor nodes 205 of network 100 use internal clocks that may have inherent “clock drift.” “Clock drift” occurs when an internal clock runs faster or slower than the true elapsed time and may be inherent in many types of internal clocks employed in monitor points 105 or sensor nodes 205. “Clock drift” may be taken into account when scheduling the time at which a node's receiver must be turned on, since the transmitting node and the receiving node may both have drifting clocks. As shown in FIG. 18, Tnominal 1805 represents the next time at which a receiver must be turned on based on scheduling data contained in the schedule message received from a parent node. An “Rx Drift Window” 1810 exists around this time, which represents Tnominal plus or minus the “Max Rx Drift” 1815 for this node over the amount of time remaining until Tnominal. If the transmitting node has zero clock drift, the receiving node should, thus, wake up at the beginning of its “Rx Drift Window” 1810.
  • The clock at the transmitting node may also incur clock drift, “Max Tx Drift” 1820, that must be accounted for at the receiving node when turning on and off the receiver. The receiving node should, thus, turn on its receiver at a local clock time that is “Max Tx Drift” 1820 plus “Max Rx Drift” 1815 before Tnominal. The receiving node should also turn off its receiver at a local clock time that is “Max Rx Drift” 1815 plus “Max Tx Drift” 1820 plus a maximum estimated time to receive a packet from the transmitting node (TRX 1825). TRX 1825 may include packet transmission time and packet propagation time. By taking into account maximum estimated clock drift at both the receiving node and transmitting node, monitor points 105 and sensor nodes 205 of sensor network 110 may successfully implement transmit/receive scheduling as described above with respect to FIGS. 1-17.
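  • The receiver on/off times implied by FIG. 18 reduce to a short calculation, sketched below with illustrative values expressed in seconds.

```python
def receiver_window(t_nominal, max_rx_drift, max_tx_drift, t_rx):
    """Turn the receiver on early enough to cover worst-case drift at both ends,
    and keep it on long enough to take in one packet (transmission plus
    propagation time, TRX)."""
    turn_on = t_nominal - (max_rx_drift + max_tx_drift)
    turn_off = t_nominal + max_rx_drift + max_tx_drift + t_rx
    return turn_on, turn_off

# Example: a 10 ms drift budget on each side and a 5 ms packet time give a
# window of [t_nominal - 20 ms, t_nominal + 25 ms].
print(receiver_window(t_nominal=100.0, max_rx_drift=0.010, max_tx_drift=0.010, t_rx=0.005))
```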
  • Exemplary Monitor Point Beacon Message Processing
  • FIG. 23 illustrates an exemplary beacon message 2300 that may be transmitted from a monitor point 105. Beacon message 2300 may include a number of fields, including a “transmitter node ID” field 2305, a “checksum” field 2310, a “NUM” field 2315, a monitor point “D(i) sequence #” field 2320 and a “# of hops to D(i)” field 2325.
  • “Transmitter node ID” field 2305 may include a unique identifier that identifies the node in network 100 that is the source of beacon message 2300. “Checksum” field 2310 may include any type of conventional error detection value that can be used to determine the presence of errors or corruption in beacon message 2300. “NUM” field 2315 may indicate the number of different monitor points 105 that are described in beacon message 2300. When beacon message 2300 is sent directly from a monitor point 105, “NUM” field 2315 can be set to one, indicating that the message describes only a single monitor point. “D(i) Sequence #” field 2320 may include a “startup number” sub-field 2330, a “counter number” sub-field 2335, and an optional “time stamp” sub-field 2340 corresponding to the monitor point 105 identified by “transmitter node ID” field 2305. “Startup number” sub-field 2330 may include a large number of data bits, such as, for example, 32 bits and may be stored in memory 2020. “Startup number” 2330 may be set to zero when monitor point 105 is initially manufactured. At every power-up of monitor point 105, processing unit 2015 can read the “startup number” stored in memory 2020, increment the number, and store the incremented startup number back in memory 2020. “Startup number” 2330, thus, maintains a log of how many times monitor point 105 has been powered up.
  • “Counter number” sub-field 2335 may be set to zero whenever monitor point 105 powers up. Counter number sub-field 2335 may further be incremented by one each time monitor point sends a beacon message 2300. “Start-up number” sub-field 2330 combined with “counter number” sub-field 2335 may, thus, provide a unique determination of which beacon message 2300 has been constructed and sent at a later time than other beacon messages. “Time stamp” subfield 2340 may include a time at which the monitor point 105 sends beacon message 2300. “# hops to D(i)” field 2325 may be set to zero, indicating that beacon message 2300 has been sent directly from monitor point 105.
  • FIGS. 24-25 are flowcharts that illustrate exemplary processing, consistent with the present invention, for constructing and transmitting beacon messages from a monitor point 105. As one skilled in the art will appreciate, the method exemplified by FIGS. 24-25 can be implemented as a sequence of instructions and stored in memory 2020 of monitor point 105 for execution by processing unit 2015.
  • Monitor point 105 may begin processing when monitor point 105 powers up from a powered down state [step 2405]. At power-up, monitor point 105 may retrieve an old “startup number” 2330 stored in memory 2020 [step 2410] and may increment the retrieved “startup number” 2330 by one [step 2415]. Monitor point 105 may then store the incremented “startup number” 2330 in memory 2020 [step 2420].
  • Monitor point 105 may determine if an interval timer is equal to a value P [step 2425]. The interval timer may be implemented, for example, in processing unit 2015. Value P may be preset at manufacture, or may be entered or changed via input device(s) 2025. If the interval timer is equal to the value P, then monitor point 105 may formulate a beacon message 2300 that may include the “transmitter node ID” field 2305 set to monitor point 105's unique identifier, “NUM” field 2315 set to one, “startup number” sub-field 2330 set to the “startup number” currently stored in memory 2020, “counter number” sub-field 2335 set to the “counter number” currently stored in memory 2020, “time stamp” sub-field 2340 set to a current time and the “# hops to D(i)” field 2325 set to zero [step 2430]. Monitor point 105 may then calculate a checksum value of the formulated message and store the resultant checksum in “checksum” field 2310 [step 2505] (FIG. 25). Monitor point 105 may transmit the formulated beacon message 2300 [step 2510] and increment “counter number” 2335 stored in memory 2020 [step 2515]. Processing may repeat at step 2425 until power down of monitor point 105.
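  • A minimal sketch of the monitor point's start-up bookkeeping and beacon formulation follows. The dictionary-based 'memory', the CRC-32 placeholder checksum, and the single-entry "descriptions" list (used so that monitor-point and sensor-node beacons can share one layout in later sketches) are all assumptions; the field names track FIG. 23.

```python
import zlib

def compute_checksum(msg):
    """Placeholder for whatever conventional error-detection value fills
    "checksum" field 2310."""
    payload = repr(sorted((k, v) for k, v in msg.items() if k != "checksum")).encode()
    return zlib.crc32(payload)

def on_power_up(memory):
    """Steps 2405-2420: increment and persist the startup number; reset the counter."""
    memory["startup_number"] = memory.get("startup_number", 0) + 1
    memory["counter_number"] = 0

def next_beacon(memory, monitor_point_id, now):
    """Steps 2430-2515: build one beacon message 2300 and bump the counter number."""
    beacon = {
        "transmitter_node_id": monitor_point_id,           # field 2305
        "num": 1,                                          # field 2315: one monitor point described
        "descriptions": [{
            "d_identifier": monitor_point_id,
            "d_sequence": (memory["startup_number"],       # sub-field 2330
                           memory["counter_number"],       # sub-field 2335
                           now),                           # sub-field 2340
            "num_hops_to_d": 0,                            # field 2325: sent directly by the monitor point
        }],
    }
    beacon["checksum"] = compute_checksum(beacon)          # step 2505
    memory["counter_number"] += 1                          # step 2515
    return beacon

# Illustrative use: power up once, then emit the first beacon.
mem = {}
on_power_up(mem)
print(next_beacon(mem, monitor_point_id=7, now=0.0))
```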
  • Exemplary Sensor Node Beacon Message Processing
  • FIG. 26 illustrates an exemplary beacon message 2600 that may be transmitted from a sensor node 205. Beacon message 2600 may include any number of fields, including “transmitter node ID” field 2305, “checksum” field 2310, “NUM” field 2315, monitor point “D(i) identifier” fields 2605 a-2605 n, monitor point “D(i) sequence #” fields 2610 a-2610 n, and monitor point “# of Hops to D(i)” fields 2615 a-2615 n.
  • “D(i) identifier” fields 2605 a-2605 n may identify monitor points 105 from which a sensor node 205 has received beacon messages 2300. “D(i) sequence #” fields 2610 a-2610 n may include “startup number” sub-fields 2620 a-2620 n, “counter number” sub-fields 2625 a-2625 n, and “time stamp” sub-fields 2630 a-2630 n associated with a monitor point 105 identified by a corresponding “D(i) identifier” field 2605. “# of hops to D(i)” fields 2615 a-2615 n may indicate the number of hops in sensor network 110 to reach the monitor point 105 identified by the corresponding “D(i) identifier” field 2605.
  • FIG. 27 is a flowchart that illustrates exemplary processing, consistent with the present invention, for constructing and transmitting a beacon message 2600 at a sensor node 205. As one skilled in the art will appreciate, the method exemplified by FIG. 27 can be implemented as a sequence of instructions and stored in memory 1920 of sensor node 205 for execution by processing unit 1915.
  • Sensor node 205 may begin processing by setting “transmitter node ID” field 2305 to sensor node 205's unique identifier [step 2705]. Sensor node 205 may further set “NUM” field 2315 to the number of entries in sensor forwarding table 2105 for which the “valid” field 2145 equals one [step 2710]. For each valid entry 2110 in sensor forwarding table 2105, sensor node 205 may increment the “# of Hops” field 2140 by one and copy information from the entry 2110 into a corresponding field of beacon message 2600 [step 2715].
  • Sensor node 205 may then calculate a checksum of beacon message 2600 and store the calculated value in “checksum” field 2310 [step 2720]. Sensor node 205 may then transmit beacon message 2600 every s seconds, repeating steps 2705-2720 for each transmitted beacon message 2600 [step 2725]. The value s may be set at manufacture or may be entered or changed via input device(s) 1930. Every multiple m of s seconds, sensor node 205 may inspect the “time stamp” field 2120 of each entry 2110 of sensor forwarding table 2105 [step 2730]. For each entry 2110 of sensor forwarding table 2105 that includes a field that was modified more than z seconds in the past, sensor node 205 may set that entry's “valid” field 2145 to zero [step 2735], thus, “aging out” that entry.
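  • Sketched with the ForwardingEntry and compute_checksum helpers from the earlier sketches, the beacon construction and aging steps of FIG. 27 might look as follows; the dictionary layout used for beacon message 2600 is an assumption.

```python
def build_sensor_beacon(node_id, forwarding_table):
    """Steps 2705-2720: describe every valid entry, incrementing the advertised
    hop count by one to account for the hop through this node."""
    valid_entries = [e for e in forwarding_table if e.valid]
    beacon = {
        "transmitter_node_id": node_id,          # field 2305
        "num": len(valid_entries),               # field 2315
        "descriptions": [
            {
                "d_identifier": e.monitor_point_id,                              # field 2605
                "d_sequence": (e.seq_startup, e.seq_counter, e.seq_time_stamp),  # field 2610
                "num_hops_to_d": e.num_hops + 1,                                 # field 2615
            }
            for e in valid_entries
        ],
    }
    beacon["checksum"] = compute_checksum(beacon)  # field 2310 (step 2720)
    return beacon

def age_out_entries(forwarding_table, now, z):
    """Steps 2730-2735: invalidate ("age out") entries not refreshed within z seconds."""
    for entry in forwarding_table:
        if now - entry.time_stamp > z:
            entry.valid = False
```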
  • Exemplary Sensor Node Beacon Message Reception Processing
  • FIGS. 28-31 are flowcharts that illustrate exemplary processing, consistent with the present invention, for receiving beacon messages from monitor points/sensor nodes and updating corresponding entries in sensor forwarding table 2105. As one skilled in the art will appreciate, the method exemplified by FIGS. 28-31 can be implemented as a sequence of instructions and stored in memory 1920 of sensor node 205 for execution by processing unit 1915.
  • To begin processing, sensor node 205 may receive a transmitted beacon message 2300/2600 from either a monitor point 105 or another sensor node [step 2805]. Sensor node 205 may then calculate a checksum of the received beacon message 2300/2600 and compare the calculated checksum value with the message's “checksum” field 2310 [step 2810]. Sensor node 205 determines if the checksums agree, indicating that no errors or corruption of the beacon message 2300/2600 occurred during transmission [step 2815]. If the checksums do not agree, sensor node 205 may discard the message [step 2820]. If the checksums agree, sensor node 205 may inspect sensor forwarding table 2105 for any entries 2110 with the “next hop” field 2135 equal to the received message's “transmitter node ID” field 2305 [step 2825]. Sensor node 205 may then set the “valid” field 2145 equal to zero for all such entries with the “next hop” field 2135 equal to the received message's “transmitter node ID” field 2305 [step 2830].
  • Sensor node 205 may inspect the received message's “NUM” field 2315 to determine the number of monitor nodes described in the beacon message 2300/2600 [step 2835]. Sensor node 205 may then set a counter value i to 1 [step 2840]. Sensor node 205 may further extract the monitor node “D(i) identifier” field 2605, the “D(i) sequence #” field 2610, and the “# of hops to D(i)” field 2615, corresponding to the counter value i, from beacon message 2300/2600 [step 2905] (FIG. 29).
  • Sensor node 205 may inspect the “monitor point ID” field 2125 of forwarding table 2105 to determine if there is a table entry 2110 corresponding to the message “D(i) identifier” field 2605 [step 2910]. If no such table entry 2110 exists, sensor node 205 may create a new entry 2110 in forwarding table 2105 for monitor node D(i) [step 2915] and processing may proceed to step 3015 below. If there is a table entry 2110 corresponding to monitor node D(i), sensor node 205 may compare the beacon message “# of hops to D(i)” field 2615 with the “# of Hops” field 2140 in forwarding table 2105 [step 2920]. If the message “# of hops to D(i)” field 2615 is less than, or equal to, the “# of hops” field 2140 of forwarding table 2105, then processing may proceed to step 3015 below. If the message “# of hops to D(i)” field 2615 is greater than the “# of Hops” field 2140, then sensor node 205 may determine if the “valid” field 2145 is set to zero for the table entry 2110 that includes the “monitor point ID” field 2125 that is equal to D(i) [step 2930]. If the “valid” field 2145 is equal to one, indicating that the entry is valid, processing may proceed with step 3115 below. If the “valid” field 2145 is equal to zero, sensor node 205 may then determine if the message “startup number” field 2620 is greater than table 2105's “startup” sub-field of “Seq #” field 2130 [step 3005]. If so, processing may continue with step 3015 below. If not, sensor node 205 may determine if the message “startup number” sub-field 2620 is equal to table 2105's “startup” sub-field of “Seq #” field 2130 and the message “counter number” field 2625 is greater than table 2105's “counter number” sub-field of “Seq #” field 2130 [step 3010]. If not, processing may continue with step 3115 below. If so, sensor node 205 may insert the message's “D(i) sequence #” field 2610 into table 2105's “Seq #” field 2130 [step 3015]. Sensor node 205 may further insert the message's “# of hops to D(i)” field 2615 into the “# of Hops” field 2140 of table 2105 [step 3020]. Sensor node 205 may also insert the message's “transmitter node ID” field 2305 into the “next hop” field 2135 of table 2105 [step 3025].
  • Sensor node 205 may further set the “valid” flag 2145 for the table entry 2110 corresponding to the monitor point identifier D(i) to one [step 3105] and time stamp the entry 2110 with a local clock, storing the time stamp in “time stamp” field 2120 [step 3110]. Sensor node 205 may increment the counter value i by one [step 3115] and determine if the counter value i is equal to the message's “NUM” field 2315 plus one [step 3120]. If not, processing may return to step 2905. If so, sensor node 205 may set the “Use?” field 2115 for all entries 2110 in forwarding table 2105 to zero [step 3125]. Sensor node 205 may inspect forwarding table 2105 to identify an entry 2110 with the “valid” flag 2145 set to one and that further has the smallest value in the “# of Hops” field 2140 [step 3130]. Sensor node 205 may then set the “Use?” field 2115 of the identified entry to one [step 3135].
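  • The forwarding-table update of FIGS. 28-31 condenses to the sketch below. It reuses the illustrative ForwardingEntry and compute_checksum helpers from the earlier sketches and assumes the table is a dict keyed by monitor point identifier and that the received beacon carries the "descriptions" list layout assumed above; the acceptance rules in the comments follow the steps cited.

```python
def process_received_beacon(table, beacon, now):
    """Fold one received beacon message into the sensor forwarding table,
    then re-select the entry to use."""
    if compute_checksum(beacon) != beacon["checksum"]:
        return                                            # steps 2810-2820: discard on checksum mismatch
    sender = beacon["transmitter_node_id"]
    for entry in table.values():                          # steps 2825-2830: invalidate routes via the sender
        if entry.next_hop == sender:
            entry.valid = False
    for desc in beacon["descriptions"]:                   # steps 2835-3120: one pass per described monitor point
        d = desc["d_identifier"]
        startup, counter, ts = desc["d_sequence"]
        entry = table.get(d)
        if entry is None:
            entry = table[d] = ForwardingEntry(monitor_point_id=d)      # step 2915
            accept = True
        elif desc["num_hops_to_d"] <= entry.num_hops:                   # step 2920
            accept = True
        elif entry.valid:                                               # step 2930
            accept = False
        elif startup > entry.seq_startup:                               # step 3005
            accept = True
        else:                                                           # step 3010
            accept = (startup == entry.seq_startup and counter > entry.seq_counter)
        if accept:                                                      # steps 3015-3110
            entry.seq_startup, entry.seq_counter, entry.seq_time_stamp = startup, counter, ts
            entry.num_hops = desc["num_hops_to_d"]
            entry.next_hop = sender
            entry.valid = True
            entry.time_stamp = now
    for entry in table.values():                                        # step 3125
        entry.use = False
    candidates = [e for e in table.values() if e.valid]                 # step 3130
    if candidates:
        min(candidates, key=lambda e: e.num_hops).use = True            # step 3135
```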
  • Exemplary Sensor Node Datagram Processing
  • FIG. 32 illustrates a first exemplary message datagram 3200, consistent with the present invention, for transmitting data, such as, for example, sensor measurement data to either a destination monitor point 105 or a destination sensor node 205 in network 100. Message datagram 3200 may include a “source node ID” field 3205, a “destination node ID” field 3210, an optional “checksum” field 3215, an optional “time-to-live” (TTL) field 3220, an optional “geo-location” field 3225, and a “sensor message” field 3230. Message datagram 3200 may also optionally include a “reverse path” flag 3235, a “direction” indicator field 3240, a “# of Node IDs Appended” field 3245, and “1st Hop Node ID” 3250 a through “Nth Hop Node ID” fields 3250N. Message datagram 3200 may additionally be used for transmitting data from a source monitor point 105 to a destination sensor node 205 in network 100.
  • “Source node ID” field 3205 may include an identifier uniquely identifying a sensor node 205 or monitor point 105 that was the original source of message datagram 3200. “Destination node ID” field 3210 may include an identifier uniquely identifying a destination monitor point 105 or sensor node 205 in network 100. “Checksum” field 3215 may include any type of conventional error detection value that can be used to determine the presence of errors or corruption in sensor datagram 3200. “TTL” field 3220 may include a value indicating a number of hops before which the message datagram 3200 should be discarded. “TTL” field 3220 may be decremented by one at each hop through network 100. “Geo-location” field 3225 may include geographic location data associated with the message datagram source node. “Sensor message” field 3230 may include sensor measurement data resulting from sensor measurements performed at the sensor node 205 identified by “source node ID” field 3205.
  • “Reverse path” flag 3235 may indicate whether sensor datagram 3200 includes information that details the path datagram 3200 traversed from the source node identified in “source node ID” field 3205 to a current node receiving datagram 3200. “Direction” indicator field 3240 may indicate the direction in network 100 that message datagram 3200 is heading. The datagram 3200 direction can be either “inbound” towards a monitor point 105 or “outbound” towards a sensor node 205. “# of Node IDs Appended” field 3245 may indicate the number of sensor nodes described in sensor datagram 3200 by the 1st through Nth “hop node ID” fields 3250 a-3250N. “1st Hop Node ID” field 3250 a through “Nth Hop Node ID” field 3250N may include the unique node identifiers identifying each node in the path between the source node indicated by “source node ID” field 3205 and the node currently receiving datagram 3200.
  • FIG. 33 is a flowchart that illustrates exemplary processing, consistent with the present invention, for fabricating and transmitting a sensor datagram 3200 at a sensor node 205. As one skilled in the art will appreciate, the method exemplified by FIG. 33 can be implemented as a sequence of instructions and stored in memory 1920 for execution by processing unit 1915.
  • Sensor node 205 may begin fabrication of sensor datagram 3200 by performing sensor measurements over one or more sampling periods using one or more sensor units 1940 a-1940 n [step 3305]. Sensor node 205 may then insert the sensor measurement data in the “sensor message” field 3230 [step 3310]. Sensor node 205 may further insert the node's own identifier in the datagram 3200 “source node ID” field 3205 [step 3315]. Sensor node 205 may also, optionally, insert a value in the datagram 3200 “time-to-live” field 3220 [step 3320]. Sensor node 205 may, optionally, insert its location in “geo-location” field 3225 [step 3325]. Sensor node 205's location may be determined by geo-location unit 1935.
  • Sensor node 205 may inspect forwarding table 2105 to identify the table entry 2110 with a “Use?” field 2115 equal to one and with the smallest “# of Hops” field 2140 [step 3330]. The monitor point identified by the “monitor point ID” field 2125 in the table entry 2110 with the “Use?” field 2115 equal to one and with the smallest “# of Hops” field 2140 will, thus, be the nearest monitor point 105 to sensor node 205. Sensor node 205 may insert the “monitor point ID” field 2125 of the table entry 2110 with a “Use?” field 2115 equal to one into datagram 3200's “destination node ID” field 3210 [step 3335].
  • Sensor node 205 may determine if datagram 3200 will include “reverse path” flag 3235 [step 3405]. In some implementations consistent with the present invention, reverse path information may not be included in any datagram 3200. In other implementations consistent with the present invention, reverse path information may be included in all datagrams 3200. In yet further implementations consistent with the present invention, reverse path information may be included only in some percentage of the datagrams 3200 sent from sensor node 205. For example, reverse path information may be included in only one out of every 100 datagrams sent from a source node, or in only one datagram every ten minutes.
  • If datagram 3200 will not include a reverse path flag, processing may continue at step 3425. If datagram 3200 will include a reverse path flag, then sensor node 205 may set the datagram “reverse path” flag 3235 to one, indicating that a reverse path should be accumulated as the datagram 3200 traverses network 100 [step 3410]. Alternatively, sensor node 205 may set the “reverse path” flag 3235 to zero, indicating that no reverse path should be accumulated as the datagram 3200 traverses network 100. Sensor node 205 may further set “direction” field 3240 to “inbound” by setting the value in the field to zero [step 3415]. Sensor node 205 may also set the “# of Node IDs Appended” field 3245 to zero [step 3420].
  • At step 3425, sensor node 205 may calculate a checksum of the fabricated datagram 3200 and insert the calculated checksum value in “checksum” field 3215 [step 3425]. Sensor node 205 may then transmit datagram 3200 to the next hop node indicated by the “next hop” field 2135 in the table entry 2110 with “Use?” field 2115 set to one and with the smallest “# of Hops” field 2140 [step 3430].
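  • Pulling steps 3305-3430 together, a sensor node's datagram fabrication might be sketched as follows. The 'node' object, its radio, the illustrative TTL value, and the list-based reverse-path representation are assumptions; compute_checksum and the table entries reuse the placeholders from the earlier sketches.

```python
def build_and_send_datagram(node, sensor_message, include_reverse_path=False):
    """Wrap sensor measurement data in a message datagram 3200 addressed to the
    nearest monitor point and hand it to the next hop."""
    usable = [e for e in node.table.values() if e.use and e.valid]
    if not usable:
        return None                                      # no known monitor point to send toward
    best = min(usable, key=lambda e: e.num_hops)         # step 3330
    datagram = {
        "source_node_id": node.node_id,                  # field 3205 (step 3315)
        "destination_node_id": best.monitor_point_id,    # field 3210 (step 3335)
        "ttl": 16,                                       # field 3220, illustrative value
        "geo_location": node.location,                   # field 3225
        "sensor_message": sensor_message,                # field 3230
    }
    if include_reverse_path:                             # steps 3405-3420
        datagram["reverse_path"] = 1                     # flag 3235
        datagram["direction"] = 0                        # field 3240: 0 means "inbound"
        datagram["num_node_ids_appended"] = 0            # field 3245
        datagram["hop_node_ids"] = []                    # fields 3250a-3250N
    datagram["checksum"] = compute_checksum(datagram)    # field 3215 (step 3425)
    node.radio.send(datagram, next_hop=best.next_hop)    # step 3430
    return datagram
```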
  • Exemplary Datagram Relaying Processing
  • FIGS. 35-39 are flowcharts that illustrate exemplary processing, consistent with the present invention, for relaying datagrams 3200 received at a sensor node 205 towards either a destination monitor point 105 or other sensor nodes in network 100. As one skilled in the art will appreciate, the method exemplified by FIGS. 35-39 can be implemented as a sequence of instructions and stored in memory 1920 of sensor node 205 for execution by processing unit 1915.
  • Sensor node 205 may begin processing by receiving a sensor datagram 3200 [step 3505]. Sensor node 205 may then, optionally, calculate a checksum value of the received datagram 3200 and compare the calculated checksum with the “checksum” field 3215 contained in datagram 3200 [step 3510]. Sensor node 205 may determine if the checksums agree [step 3515], and if not, sensor node 205 may discard the received datagram 3200 [step 3520]. If the checksums do agree, sensor node 205 may, optionally, retrieve the “TTL” field 3220 from datagram 3200 and decrement the value by one [step 3525]. Sensor node 205 may then, optionally, determine if the decremented “TTL” value is equal to zero [step 3530]. If so, sensor node 205 may discard the datagram 3200 [step 3520].
  • If the decremented “TTL” value is not equal to zero, sensor node 205 may determine if datagram 3200 does not contain a “reverse path” flag 3235 or if “reverse path” flag 3235 is set to zero [step 3535]. If datagram 3200 contains a “reverse path” flag that is set to one, processing may continue at step 3705 below. If datagram 3200 does not contain a “reverse path” flag 3235 or the “reverse path” flag 3235 is set to zero, sensor node 205 may retrieve the “destination node ID” field 3210 from datagram 3200 and find a table entry 2110 with the “monitor point ID” field 2125 equal to the datagram “destination node ID” field 3210 [step 3605]. If sensor node 205 finds that there is no table entry 2110 for the monitor point identified by the “destination node ID” field 3210 [step 3610], then sensor node 205 may discard datagram 3200 [step 3615].
  • If sensor node 205 finds a table entry 2110 for the monitor point identified by the “destination node ID” field 3210, sensor node 205 may, optionally, calculate a new checksum for datagram 3200 [step 3620]. Sensor node 205 may, optionally, insert the calculated checksum in “checksum” field 3215 [step 3625]. Sensor node 205 may further read the “next hop” field 2135 from the table entry 2110 corresponding to the monitor point identified by the datagram “destination node ID” field 3210 [step 3630]. Sensor node 205 may transmit datagram 3200 to the node identified by the “next hop” field 2135 [step 3635].
  • At step 3705, sensor node 205 may determine if the datagram 3200 “direction” indicator field 3240 indicates that the datagram is heading “inbound.” If not, processing may continue with step 3905 below. If so, sensor node 205 may append sensor node 205's unique identifier to datagram 3200 as a “hop node ID” field 3250 and may increment the datagram “# of Node IDs Appended” field 3245 [step 3710]. Sensor node 205 may read the “destination node ID” field 3210 from datagram 3200 [step 3715]. Sensor node 205 may further inspect forwarding table 2105 to locate a table entry 2110 with the “monitor point ID” field 2125 equal to the datagram “destination node ID” field 3210 [step 3720]. Sensor node 205 may then determine if the “valid” field 2145 in the located table entry is zero [step 3725]. If so, sensor node 205 may discard datagram 3200 [step 3730]. If not, sensor node 205 may read the “next hop” field 2135 of the located table entry [step 3805] and may transmit datagram 3200 to the node identified by the “next hop” field 2135 [step 3810].
  • At step 3905, sensor node 205 may determine if the datagram “destination node ID” field 3210 is equal to sensor node 205's unique identifier [step 3905]. If so, sensor node 205 may read the datagram “sensor message” field 3230 [step 3910]. If not, sensor node 205 may determine if the “# of Node IDs Appended” field 3245 is equal to zero [step 3915]. If so, sensor node 205 may discard datagram 3200 [step 3920]. If not, sensor node 205 may choose the last “hop node ID” 3250 from datagram 3200 as the next hop for the datagram [step 3925]. Sensor node 205 may remove this “Hop node ID” field 3250 from datagram 3200 and decrement the “# of Node IDs Appended” field 3245 [step 3930]. Sensor node 205 may then transmit the datagram to the next hop identified by the removed “hop node ID” field 3250 [step 3935].
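  • For datagrams that carry the “reverse path” flag, the relay logic of FIGS. 37-39 reduces to the sketch below: inbound datagrams accumulate this node's identifier on their way toward the monitor point, and outbound datagrams are steered by popping the most recently appended hop identifier. The object and method names are illustrative assumptions carried over from the earlier sketches.

```python
def relay_reverse_path_datagram(node, datagram):
    """Relay one datagram 3200 whose "reverse path" flag 3235 is set."""
    if datagram["direction"] == 0:                                   # step 3705: heading "inbound"
        datagram["hop_node_ids"].append(node.node_id)                # step 3710
        datagram["num_node_ids_appended"] += 1
        entry = node.table.get(datagram["destination_node_id"])      # steps 3715-3720
        if entry is None or not entry.valid:                         # step 3725
            return                                                   # step 3730: discard
        node.radio.send(datagram, next_hop=entry.next_hop)           # steps 3805-3810
    elif datagram["destination_node_id"] == node.node_id:            # step 3905
        node.process(datagram["sensor_message"])                     # step 3910
    elif datagram["num_node_ids_appended"] == 0:                     # step 3915
        return                                                       # step 3920: discard
    else:
        next_hop = datagram["hop_node_ids"].pop()                    # steps 3925-3930
        datagram["num_node_ids_appended"] -= 1
        node.radio.send(datagram, next_hop=next_hop)                 # step 3935
```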
  • Exemplary Monitor Point Datagram Processing
  • FIGS. 40-43 illustrate exemplary processing, consistent with the present invention, for processing datagrams 3200 received at a monitor point 105. As one skilled in the art will appreciate, the method exemplified by FIGS. 40-43 can be implemented as a sequence of instructions and stored in memory 2020 of monitor point 105 for execution by processing unit 2015.
  • Monitor point 105 may begin processing by receiving a datagram 3200 from a sensor node 205 in network 100 [step 4005]. Monitor point 105 may, optionally, calculate a checksum value for the datagram 3200 and compare the calculated value with the datagram “checksum” field 3215 [step 4010]. If the checksums do not agree [step 4015], monitor point 105 may discard the datagram 3200 [step 4020]. If the checksums do agree, monitor point 105 may inspect the datagram “source node ID” field 3205 and compare the field with the “sensor ID” field 2215 in all entries 2210 of monitor point table 2205 [step 4025]. If this inspection determines that the source node is unknown, then monitor point 105 may create a new table entry 2210 for the sensor node identified by the datagram “source node ID” field 3205 [step 4035]. Monitor point 105 may further store the datagram “source node ID” field 3205 in the “sensor ID” field 2215 of the newly created table entry [step 4040].
  • If the source node is known, then monitor point 105 may store the datagram “geo-location” field 3225 in the table “geo-location” field 2220 [step 4045]. Monitor point 105 may further store the datagram “sensor message” field 3230 in the table “sensor message” field 2225 [step 4105]. Monitor point 105 may determine if datagram 3200 includes a “reverse path” flag 3235 [step 4110]. If not, processing may continue at step 4125. If datagram 3200 does include a “reverse path” flag, then monitor point 105 may store the datagram “# of Node IDs Appended” field 3245 in the table “# of Nodes” field 2230 [step 4115]. Monitor point 105 may store the datagram “1st hop node ID” field 3250 a through “Nth hop node ID” field 3250N in reverse order in table 2205 by storing in the table “Nth hop” 2235N through “1st hop” fields 2235 a [step 4120].
  • Monitor point 105 may, optionally, retrieve selected data from monitor point table 2205 and exchange data, via network 115, with other monitor points in network 100 [step 4125]. Monitor point 105 may further determine if a datagram 3200 should be sent to a sensor node 205 in network 100 [step 4130]. Monitor point 105 may, for example, periodically send operation control data to a sensor node 205. If no datagram 3200 is to be sent to a sensor node 205, processing may return to step 4005. If a datagram 3200 is to be sent to a sensor node 205, monitor point 105 may insert its own unique identifier in the datagram 3200 “source node ID” field 3205 [step 4205].
  • Monitor point 105 may insert the table 2205 “sensor ID” field 2215 corresponding to the destination sensor node 205 in the datagram “destination node ID” field 3210 [step 4210]. Monitor point 105 may insert a value in the datagram “TTL” field 3220 [step 4215]. Monitor point 105 may further insert the monitor point's location in the datagram “geo-location” field 3225 [step 4220]. Monitor point 105 may further formulate a sensor message and insert the message in the datagram “sensor message” field 3230 [step 4225]. Monitor point 105 may also set the “direction” indicator field 3240 to “outbound” [step 4230] and may insert the table “# of Nodes” field 2230, corresponding to the table entry 2210 with the appropriate “sensor ID” 2215, into the datagram “# of Node IDs Appended” field 3245 [step 4235].
  • Monitor point 105 may insert the table “1st Hop” field 2235 a through “Nth hop” field 2235N into the corresponding datagram “1st Hop Node ID” 3250 a through “Nth Hop Node ID” 3250N fields [step 4305]. Monitor point 105 may calculate a checksum value for the datagram 3200 and insert the calculated value in the datagram “checksum” field 3215 [step 4310]. Monitor point 105 may transmit datagram 3200 to the first hop identified by the datagram “1st Hop Node ID” field 3250 a [step 4315].
  • CONCLUSION
  • Systems and methods consistent with the present invention, therefore, provide mechanisms that enable sensor node transmitters and receivers to be turned off, and remain in a “sleep” state, for substantial periods, thus, increasing the energy efficiency of the nodes. Systems and methods consistent with the present invention further implement transmission and reception schedules that permit the reception and forwarding of packets containing routing, or other types of data, during short periods when the sensor node transmitters and receivers are powered up and, thus, “awake.” The present invention, thus, increases sensor node operational life by reducing energy consumption while permitting the reception and forwarding of the routing messages needed to self-organize the distributed network.
  • The foregoing description of exemplary embodiments of the present invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. For example, while certain components of the invention have been described as implemented in hardware and others in software, other hardware/software configurations may be possible. Also, while series of steps have been described with regard to FIGS. 10-16, 24-25, 27-31, and 33-43, the order of the steps is not critical.
  • No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used. The scope of the invention is defined by the following claims and their equivalents.

Claims (44)

What is claimed is:
1.-18. (canceled)
19. A method, comprising:
receiving, at a wireless destination node in a wireless multi-node network, a beacon message noting a presence of another node;
broadcasting, at the wireless destination node, a routing message to at least one other node in the wireless multi-node network in response to the received beacon message;
receiving, at the wireless destination node, an additional message comprising:
a unique identifier for a wireless source node, and
sequencing data indicating a sequence of the message;
extracting distance data indicating a number of hops to reach the wireless destination node from the wireless source node in the wireless network; and
updating a forwarding table, wherein the forwarding table includes identifiers for neighboring sensor nodes.
20. The method of claim 19, further comprising:
organizing nodes in the network into a hierarchy of tiers.
21. The method of claim 20, wherein the at least one other node resides in a higher tier than the destination node.
22. The method of claim 20, wherein the additional message is destined for a data collection point residing in a lowest tier of the network.
23. A node, comprising:
a receiver configured to receive, at a wireless destination node in a wireless multi-node network, a beacon message noting a presence of another node;
a transmitter configured to broadcast, at the wireless destination node, a routing message to at least one other node in the wireless multi-node network in response to the received beacon message;
wherein the node is operable such that the receiver is further configured to receive, at the wireless destination node, an additional message comprising:
a unique identifier for a wireless source node; and
sequencing data indicating a sequence of the message;
a processing unit configured to:
extract distance data indicating a number of hops to reach the wireless destination node from the wireless source node in the wireless network, and
update a forwarding table, wherein the forwarding table includes identifiers for neighboring sensor nodes.
24. A computer program product on a non-transitory computer-readable medium, comprising:
computer code for receiving, at a wireless destination node, a beacon message noting a presence of another node;
computer code for broadcasting, at the wireless destination node, a routing message to at least one other node in a wireless multi-node network in response to the received beacon message;
computer code for receiving, at the wireless destination node, an additional message comprising:
a unique identifier for a wireless source node, and
sequencing data indicating a sequence of the message;
computer code for extracting distance data indicating a number of hops to reach the wireless destination node from the wireless source node in the wireless multi-node network; and
computer code for updating a forwarding table, wherein the forwarding table includes identifiers for neighboring sensor nodes.
25.-34. (canceled)
35. The node of claim 23, wherein the wireless multi-node network is an ad hoc network.
36. The node of claim 23 wherein the wireless multi-node network is a wireless sensor network.
37. The node of claim 23, wherein the additional message further comprises data for detecting errors in the additional message.
38. The node of claim 37, wherein the data for detecting errors comprises a message checksum value.
39. The node of claim 23, wherein the wireless source node is a sensor node.
40. The node of claim 39, wherein the processing unit is further configured to perform a plurality of measurements over a sampling period.
41. The node of claim 40, wherein the processing unit is further configured to receive, from the wireless source node by the wireless destination node, the plurality of measurements via a plurality of packets.
42. The node of claim 23, wherein the additional message is a packet.
43. The node of claim 23, wherein the wireless multi-node network is a packet-switched network.
44. The node of claim 23, wherein the receiver is configured to receive the additional message in accordance with a Time Division Multiple Access plan.
45. The node of claim 23, wherein the wireless destination node comprises a processor, a clock, and a power supply.
46. The node of claim 45, wherein the clock of the wireless destination node is synchronized to an external clock base.
47. The node of claim 23 wherein the wireless destination node is equipped to reduce clock drift.
48. The node of claim 23, wherein the additional message is transmitted within pre-defined time slots.
49. The node of claim 23, wherein the wireless multi-node network includes a pseudo-random number generator.
50. The node of claim 49, wherein the additional message is encrypted for improving network security, and wherein the pseudo-random number generator facilitates the encryption of the message.
51. The node of claim 23, wherein the additional message is received via an intermediary node.
52. The node of claim 23, wherein the wireless source node includes at least one of random access memory (RAM) and read only memory (ROM).
53. The node of claim 23, wherein the wireless multi-node network includes a database.
54. The node of claim 23, wherein the wireless multi-node network is a radio frequency network.
55. The node of claim 23, wherein the wireless source node is portable.
56. The node of claim 23, wherein the wireless source node is permitted to join and leave the wireless multi-node network.
57. The node of claim 23, wherein the wireless source node is an endpoint device.
58. The node of claim 23, wherein the receiver is further configured to receive, by the wireless source node, an ACK signal upon sending of the message.
59. The node of claim 23, wherein the processing unit is further configured to listen for a beacon.
60. The node of claim 59, wherein the beacon is a wireless broadcast message.
61. The node of claim 59, wherein the beacon is transmitted at preset times.
62. The node of claim 23, wherein the additional message further comprises a unique identifier for the wireless destination node.
63. The node of claim 23, wherein the wireless destination node includes a data buffer.
64. The node of claim 23, wherein the additional message includes a header.
65. The node of claim 23, wherein the sequencing data comprises a counter number.
66. The node of claim 23, wherein the processing unit is further configured to store, in the forwarding table, the unique identifier for the wireless source node.
67. The node of claim 23, wherein the forwarding table includes a plurality of entries.
68. The node of claim 23, wherein the forwarding table is stored in the wireless destination node.
69. The node of claim 23, wherein the processing unit is further configured to inspect, by the wireless source node, the forwarding table for identifying a table entry.
70. The node of claim 69, wherein the processing unit is further configured to transmit a subsequent message based on the identifying table entry.
US12/537,085 2001-11-30 2009-08-06 Energy efficient forwarding in ad-hoc wireless networks Abandoned US20150029914A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US12/537,085 US20150029914A1 (en) 2001-11-30 2009-08-06 Energy efficient forwarding in ad-hoc wireless networks
US14/538,563 US9674858B2 (en) 2001-11-30 2014-11-11 Receiver scheduling in wireless networks
US15/409,055 US10588139B2 (en) 2001-11-30 2017-01-18 Scheduling communications in a wireless network
US16/807,311 US10863528B2 (en) 2001-11-30 2020-03-03 Scheduling communications in a wireless network
US17/114,624 US11445523B2 (en) 2002-12-23 2020-12-08 Scheduling communications in a wireless network

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US09/998,946 US7020501B1 (en) 2001-11-30 2001-11-30 Energy efficient forwarding in ad-hoc wireless networks
US10/328,566 US7421257B1 (en) 2001-11-30 2002-12-23 Receiver scheduling in ad hoc wireless networks
US12/174,512 US7623897B1 (en) 2001-11-30 2008-07-16 Receiver scheduling in ad hoc wireless networks
US12/253,130 US7979096B1 (en) 2001-11-30 2008-10-16 Energy efficient forwarding in ad-hoc wireless networks
US12/537,085 US20150029914A1 (en) 2001-11-30 2009-08-06 Energy efficient forwarding in ad-hoc wireless networks

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/253,130 Continuation US7979096B1 (en) 2001-11-30 2008-10-16 Energy efficient forwarding in ad-hoc wireless networks

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/538,563 Continuation US9674858B2 (en) 2001-11-30 2014-11-11 Receiver scheduling in wireless networks

Publications (1)

Publication Number Publication Date
US20150029914A1 true US20150029914A1 (en) 2015-01-29

Family

ID=75491817

Family Applications (9)

Application Number Title Priority Date Filing Date
US10/328,566 Active 2024-06-25 US7421257B1 (en) 2001-11-30 2002-12-23 Receiver scheduling in ad hoc wireless networks
US12/174,512 Expired - Lifetime US7623897B1 (en) 2001-11-30 2008-07-16 Receiver scheduling in ad hoc wireless networks
US12/253,130 Expired - Fee Related US7979096B1 (en) 2001-11-30 2008-10-16 Energy efficient forwarding in ad-hoc wireless networks
US12/537,085 Abandoned US20150029914A1 (en) 2001-11-30 2009-08-06 Energy efficient forwarding in ad-hoc wireless networks
US12/537,010 Expired - Fee Related US7979098B1 (en) 2001-11-30 2009-08-06 Receiver scheduling in ad hoc wireless networks
US14/538,563 Expired - Lifetime US9674858B2 (en) 2001-11-30 2014-11-11 Receiver scheduling in wireless networks
US15/409,055 Expired - Fee Related US10588139B2 (en) 2001-11-30 2017-01-18 Scheduling communications in a wireless network
US16/807,311 Expired - Lifetime US10863528B2 (en) 2001-11-30 2020-03-03 Scheduling communications in a wireless network
US17/114,624 Expired - Lifetime US11445523B2 (en) 2002-12-23 2020-12-08 Scheduling communications in a wireless network

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US10/328,566 Active 2024-06-25 US7421257B1 (en) 2001-11-30 2002-12-23 Receiver scheduling in ad hoc wireless networks
US12/174,512 Expired - Lifetime US7623897B1 (en) 2001-11-30 2008-07-16 Receiver scheduling in ad hoc wireless networks
US12/253,130 Expired - Fee Related US7979096B1 (en) 2001-11-30 2008-10-16 Energy efficient forwarding in ad-hoc wireless networks

Family Applications After (5)

Application Number Title Priority Date Filing Date
US12/537,010 Expired - Fee Related US7979098B1 (en) 2001-11-30 2009-08-06 Receiver scheduling in ad hoc wireless networks
US14/538,563 Expired - Lifetime US9674858B2 (en) 2001-11-30 2014-11-11 Receiver scheduling in wireless networks
US15/409,055 Expired - Fee Related US10588139B2 (en) 2001-11-30 2017-01-18 Scheduling communications in a wireless network
US16/807,311 Expired - Lifetime US10863528B2 (en) 2001-11-30 2020-03-03 Scheduling communications in a wireless network
US17/114,624 Expired - Lifetime US11445523B2 (en) 2002-12-23 2020-12-08 Scheduling communications in a wireless network

Country Status (1)

Country Link
US (9) US7421257B1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150120863A1 (en) * 2013-10-25 2015-04-30 Qualcomm Incorporated Proxy network device selection in a communication network
US9257036B2 (en) * 2012-05-10 2016-02-09 University of Alaska Anchorage Long lifespan wireless sensors and sensor network
US9503969B1 (en) * 2015-08-25 2016-11-22 Afero, Inc. Apparatus and method for a dynamic scan interval for a wireless device
US20170171747A1 (en) * 2015-12-14 2017-06-15 Afero, Inc. System and method for establishing a secondary communication channel to control an internet of things (iot) device
US9843929B2 (en) 2015-08-21 2017-12-12 Afero, Inc. Apparatus and method for sharing WiFi security data in an internet of things (IoT) system
US10419342B2 (en) * 2015-09-15 2019-09-17 At&T Mobility Ii Llc Gateways for sensor data packets in cellular networks
US10447784B2 (en) 2015-12-14 2019-10-15 Afero, Inc. Apparatus and method for modifying packet interval timing to identify a data transfer condition
US10560909B2 (en) * 2017-02-17 2020-02-11 Texas Instruments Incorporated Signal analyzer and synchronizer for networks
WO2020150349A1 (en) * 2019-01-15 2020-07-23 Loud-Hailer, Inc. Device presence method and system for mesh network management
US10805344B2 (en) 2015-12-14 2020-10-13 Afero, Inc. Apparatus and method for obscuring wireless communication patterns
WO2021006932A1 (en) * 2019-07-08 2021-01-14 Middle Chart, LLC Spatially self-verifying array of nodes
DE102020113236A1 (en) 2020-05-15 2021-11-18 Krohne Messtechnik Gmbh Method for locating a network device in a mesh network and corresponding mesh network

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7421257B1 (en) 2001-11-30 2008-09-02 Stragent, Llc Receiver scheduling in ad hoc wireless networks
AU2003287181A1 (en) * 2002-10-24 2004-05-13 Bbnt Solutions Llc Spectrum-adaptive networking
CN101442712B (en) * 2003-11-07 2013-07-17 株式会社日立制作所 Radio communicating system, base station, radio terminal and radio communication method
US7523096B2 (en) 2003-12-03 2009-04-21 Google Inc. Methods and systems for personalized network searching
US7653042B2 (en) * 2004-02-27 2010-01-26 Alcatel-Lucent Usa Inc. Method of burst scheduling in a communication network
KR100619068B1 (en) * 2005-01-31 2006-08-31 삼성전자주식회사 Method for allocating channel time for peer-to-peer communication in Wireless Universal Serial Bus and the method for communicating using the same
US7979048B2 (en) * 2005-09-15 2011-07-12 Silicon Laboratories Inc. Quasi non-volatile memory for use in a receiver
US9838979B2 (en) * 2006-05-22 2017-12-05 Apple Inc. Power efficient wireless network detection
US8320244B2 (en) * 2006-06-30 2012-11-27 Qualcomm Incorporated Reservation based MAC protocol
US8493955B2 (en) 2007-01-05 2013-07-23 Qualcomm Incorporated Interference mitigation mechanism to enable spatial reuse in UWB networks
US20090016251A1 (en) * 2007-07-13 2009-01-15 Gainspan, Inc. Management method and system of low power consuming devices
US20090147714A1 (en) * 2007-12-05 2009-06-11 Praval Jain Method and system for reducing power consumption in wireless sensor networks
WO2009083018A1 (en) * 2007-12-27 2009-07-09 Siemens Aktiengesellschaft Energy-efficient operation of a communication network
EP2241014B1 (en) * 2008-02-05 2016-11-02 Philips Lighting Holding B.V. Controlling the power consumption of a receiving unit
US8134497B2 (en) 2008-09-30 2012-03-13 Trimble Navigation Limited Method and system for location-dependent time-specific correction data
US8139504B2 (en) 2009-04-07 2012-03-20 Raytheon Bbn Technologies Corp. System, device, and method for unifying differently-routed networks using virtual topology representations
DE102009026124A1 (en) * 2009-07-07 2011-01-13 Elan Schaltelemente Gmbh & Co. Kg Method and system for acquisition, transmission and evaluation of safety-related signals
JP5721713B2 (en) * 2009-07-23 2015-05-20 ノキア コーポレイション Method and apparatus for reducing power consumption when operating as a Bluetooth Low Energy device
US9420385B2 (en) 2009-12-21 2016-08-16 Starkey Laboratories, Inc. Low power intermittent messaging for hearing assistance devices
US8760995B1 (en) 2010-07-08 2014-06-24 Amdocs Software Systems Limited System, method, and computer program for routing data in a wireless sensor network
DE102010034521B4 (en) 2010-08-16 2018-08-16 Atmel Corp. Receiver and method for receiving by a receiver of a node in a radio network
TWI511471B (en) * 2010-08-16 2015-12-01 Atmel Corp Receiver and method for the reception of a node by a receiver in a wireless network
US9899882B2 (en) * 2010-12-20 2018-02-20 Qualcomm Incorporated Wireless power peer to peer communication
TWI528760B (en) * 2011-12-22 2016-04-01 International Business Machines Corporation Method for routing data in a wireless sensor network
US9325792B2 (en) * 2012-11-07 2016-04-26 Microsoft Technology Licensing, Llc Aggregation framework using low-power alert sensor
US9172517B2 (en) * 2013-06-04 2015-10-27 Texas Instruments Incorporated Network power optimization via white lists
US9565575B2 (en) * 2013-07-25 2017-02-07 Honeywell International Inc. Interference avoidance technique for wireless networks used in critical applications
US9614580B2 (en) 2015-01-12 2017-04-04 Texas Instruments Incorporated Network device with frequency hopping sequences for all channel-numbers for channel hopping with blacklisting
US10028220B2 (en) * 2015-01-27 2018-07-17 Locix, Inc. Systems and methods for providing wireless asymmetric network architectures of wireless devices with power management features
WO2017018936A1 (en) * 2015-07-24 2017-02-02 Voxp Pte Ltd System and method for relaying information
US10112300B2 (en) * 2016-02-25 2018-10-30 King Fahd University of Petroleum and Minerals Apparatus and method of sensor deployment
US10455350B2 (en) 2016-07-10 2019-10-22 ZaiNar, Inc. Method and system for radiolocation asset tracking via a mesh network
US10362374B2 (en) 2016-10-27 2019-07-23 Itron, Inc. Discovery mechanism for communication in wireless networks
GB2557992B (en) * 2016-12-21 2021-08-18 Texecom Ltd Frequency hopping spread spectrum communication in mesh networks
US10554369B2 (en) 2016-12-30 2020-02-04 Itron, Inc. Group acknowledgement message efficiency
WO2018125233A1 (en) * 2016-12-30 2018-07-05 Agerstam Mats Mechanism for efficient data reporting in IIoT WSN
CN110495220B (en) * 2017-03-31 2021-11-02 中兴通讯股份有限公司 Method and apparatus for low power device synchronization
US11102698B2 (en) * 2019-12-30 2021-08-24 Prince Sultan University Tabu node selection with minimum spanning tree for WSNs

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6028857A (en) * 1997-07-25 2000-02-22 Massachusetts Institute Of Technology Self-organizing network
US20020027894A1 (en) * 2000-04-12 2002-03-07 Jori Arrakoski Generation broadband wireless internet, and associated method, therefor
US6816460B1 (en) * 2000-03-14 2004-11-09 Lucent Technologies Inc. Location based routing for mobile ad-hoc networks

Family Cites Families (121)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL130532C (en) * 1961-02-09
US5495482A (en) 1989-09-29 1996-02-27 Motorola Inc. Packet transmission system and method utilizing both a data bus and dedicated control lines
AU7788191A (en) 1989-11-22 1991-06-13 David C. Russell Computer control system
US5117430A (en) * 1991-02-08 1992-05-26 International Business Machines Corporation Apparatus and method for communicating between nodes in a network
JP2730810B2 (en) 1991-05-10 1998-03-25 シャープ株式会社 Information processing device
US5940771A (en) * 1991-05-13 1999-08-17 Norand Corporation Network supporting roaming, sleeping terminals
US5297142A (en) 1991-07-18 1994-03-22 Motorola, Inc. Data transfer method and apparatus for communication between a peripheral and a master
US5247285A (en) 1992-03-06 1993-09-21 Everex Systems, Inc. Standup portable personal computer with detachable wireless keyboard and adjustable display
US5371764A (en) 1992-06-26 1994-12-06 International Business Machines Corporation Method and apparatus for providing an uninterrupted clock signal in a data processing system
US5371734A (en) 1993-01-29 1994-12-06 Digital Ocean, Inc. Medium access control protocol for wireless network
GB9304622D0 (en) * 1993-03-06 1993-04-21 Ncr Int Inc Wireless local area network apparatus
GB9304638D0 (en) 1993-03-06 1993-04-21 Ncr Int Inc Wireless data communication system having power saving function
DE69305734T2 (en) 1993-06-30 1997-05-15 Ibm Programmable high-performance data communication adaptation for high-speed packet transmission networks
JP3220599B2 (en) 1993-12-28 2001-10-22 三菱電機株式会社 Data queuing device
US6292508B1 (en) * 1994-03-03 2001-09-18 Proxim, Inc. Method and apparatus for managing power in a frequency hopping medium access control protocol
EP0676878A1 (en) 1994-04-07 1995-10-11 International Business Machines Corporation Efficient point to point and multi point routing mechanism for programmable packet switching nodes in high speed data transmission networks
US5515369A (en) 1994-06-24 1996-05-07 Metricom, Inc. Method for frequency sharing and frequency punchout in frequency hopping communications network
US5541912A (en) 1994-10-04 1996-07-30 At&T Corp. Dynamic queue length thresholds in a shared memory ATM switch
US6128331A (en) 1994-11-07 2000-10-03 Cisco Systems, Inc. Correlation system for use in wireless direct sequence spread spectrum systems
US5583866A (en) * 1994-12-05 1996-12-10 Motorola, Inc. Method for delivering broadcast packets in a frequency hopping local area network
IL117221A0 (en) 1995-02-28 1996-06-18 Gen Instrument Corp Configurable hybrid medium access control for cable metropolitan area networks
US5604735A (en) 1995-03-15 1997-02-18 Finisar Corporation High speed network switch
US6097707A (en) 1995-05-19 2000-08-01 Hodzic; Migdat I. Adaptive digital wireless communications network apparatus and process
US5752202A (en) 1995-06-07 1998-05-12 Motorola, Inc. Method of message delivery adapted for a power conservation system
US5598419A (en) 1995-07-17 1997-01-28 National Semiconductor Corporation Dynamic synchronization code detection window
US5832492A (en) 1995-09-05 1998-11-03 Compaq Computer Corporation Method of scheduling interrupts to the linked lists of transfer descriptors scheduled at intervals on a serial bus
US5737328A (en) 1995-10-04 1998-04-07 Aironet Wireless Communications, Inc. Network communication system with information rerouting capabilities
US5721733A (en) * 1995-10-13 1998-02-24 General Wireless Communications, Inc. Wireless network access scheme
US5680768A (en) 1996-01-24 1997-10-28 Hughes Electronics Concentric pulse tube expander with vacuum insulator
US5699357A (en) 1996-03-06 1997-12-16 Bbn Corporation Personal data network
EP0802655A3 (en) 1996-04-17 1999-11-24 Matsushita Electric Industrial Co., Ltd. Communication network
US5781028A (en) 1996-06-21 1998-07-14 Microsoft Corporation System and method for a switched data bus termination
US5896375A (en) 1996-07-23 1999-04-20 Ericsson Inc. Short-range radio communications system and method of use
US6308061B1 (en) * 1996-08-07 2001-10-23 Telxon Corporation Wireless software upgrades with version control
US5848064A (en) 1996-08-07 1998-12-08 Telxon Corporation Wireless software upgrades with version control
US5857080A (en) 1996-09-10 1999-01-05 Lsi Logic Corporation Apparatus and method for address translation in bus bridge devices
US6434158B1 (en) 1996-10-15 2002-08-13 Motorola, Inc. Entryway system using proximity-based short-range wireless links
US6069896A (en) 1996-10-15 2000-05-30 Motorola, Inc. Capability addressable network and method therefor
US6487180B1 (en) 1996-10-15 2002-11-26 Motorola, Inc. Personal information system using proximity-based short-range wireless links
US6434159B1 (en) 1996-10-15 2002-08-13 Motorola, Inc. Transaction system and method therefor
US6424623B1 (en) 1996-10-15 2002-07-23 Motorola, Inc. Virtual queuing system using proximity-based short-range wireless links
US6842430B1 (en) * 1996-10-16 2005-01-11 Koninklijke Philips Electronics N.V. Method for configuring and routing data within a wireless multihop network and a wireless network for implementing the same
US6000011A (en) 1996-12-09 1999-12-07 International Business Machines Corporation Multi-entry fully associative transition cache
US6011784A (en) 1996-12-18 2000-01-04 Motorola, Inc. Communication system and method using asynchronous and isochronous spectrum for voice and data
US5909183A (en) 1996-12-26 1999-06-01 Motorola, Inc. Interactive appliance remote controller, system and method
GB9720856D0 (en) * 1997-10-01 1997-12-03 Olivetti Telemedia Spa Mobile networking
US6331972B1 (en) 1997-02-03 2001-12-18 Motorola, Inc. Personal data storage and transaction device system and method
CN1259249A (en) 1997-06-02 2000-07-05 摩托罗拉公司 Method for authorizing couplings between devices in a capability addressable network
US6097733A (en) 1997-06-13 2000-08-01 Nortel Networks Corporation System and associated method of operation for managing bandwidth in a wireless communication system supporting multimedia communications
US5933611A (en) 1997-06-23 1999-08-03 Opti Inc. Dynamic scheduler for time multiplexed serial bus
US6094435A (en) 1997-06-30 2000-07-25 Sun Microsystems, Inc. System and method for a quality of service in a multi-layer network element
US6005854A (en) 1997-08-08 1999-12-21 Cwill Telecommunication, Inc. Synchronous wireless access protocol method and apparatus
GB2328046B (en) * 1997-08-08 2002-06-05 Ibm Data processing network
US6026297A (en) 1997-09-17 2000-02-15 Telefonaktiebolaget Lm Ericsson Contemporaneous connectivity to multiple piconets
US6590928B1 (en) 1997-09-17 2003-07-08 Telefonaktiebolaget Lm Ericsson (Publ) Frequency hopping piconets in an uncoordinated wireless multi-user system
US5903777A (en) 1997-10-02 1999-05-11 National Semiconductor Corp. Increasing the availability of the universal serial bus interconnects
GB9721008D0 (en) * 1997-10-03 1997-12-03 Hewlett Packard Co Power management method for use in a wireless local area network (LAN)
US6115390A (en) 1997-10-14 2000-09-05 Lucent Technologies, Inc. Bandwidth reservation and collision resolution method for multiple access communication networks where remote hosts send reservation requests to a base station for randomly chosen minislots
US5974327A (en) 1997-10-21 1999-10-26 At&T Corp. Adaptive frequency channel assignment based on battery power level in wireless access protocols
KR100250477B1 (en) 1997-12-06 2000-04-01 정선종 Location tracking method of mobile terminal using radio lan
US6079033A (en) 1997-12-11 2000-06-20 Intel Corporation Self-monitoring distributed hardware systems
US6011486A (en) 1997-12-16 2000-01-04 Intel Corporation Electronic paging device including a computer connection port
US6570857B1 (en) 1998-01-13 2003-05-27 Telefonaktiebolaget L M Ericsson Central multiple access control for frequency hopping radio networks
US6249740B1 (en) 1998-01-21 2001-06-19 Kabushikikaisha Equos Research Communications navigation system, and navigation base apparatus and vehicle navigation apparatus both used in the navigation system
JP3609599B2 (en) 1998-01-30 2005-01-12 富士通株式会社 Node proxy system, node monitoring system, method thereof, and recording medium
US6256682B1 (en) 1998-05-06 2001-07-03 Apple Computer, Inc. Signaling of power modes over an interface bus
US6748451B2 (en) * 1998-05-26 2004-06-08 Dow Global Technologies Inc. Distributed computing environment using real-time scheduling logic and time deterministic architecture
JPH11341538A (en) * 1998-05-29 1999-12-10 Nec Shizuoka Ltd Radio communication equipment
US6067301A (en) 1998-05-29 2000-05-23 Cabletron Systems, Inc. Method and apparatus for forwarding packets from a plurality of contending queues to an output
TW432840B (en) * 1998-06-03 2001-05-01 Sony Corp Communication control method, system, and device
US6715071B2 (en) 1998-06-26 2004-03-30 Canon Kabushiki Kaisha System having devices connected via communication lines
US6272140B1 (en) 1998-07-02 2001-08-07 Gte Service Corporation Bandwidth allocation in a wireless personal area network
US6351468B1 (en) 1998-07-02 2002-02-26 Gte Service Corporation Communications protocol in a wireless personal area network
US6314091B1 (en) 1998-07-02 2001-11-06 Gte Service Corporation Wireless personal area network with automatic detachment
US6208247B1 (en) * 1998-08-18 2001-03-27 Rockwell Science Center, Llc Wireless integrated sensor network using multiple relayed communications
JP3444475B2 (en) * 1998-10-22 2003-09-08 インターナショナル・ビジネス・マシーンズ・コーポレーション Response determination method, communication method, and wireless transceiver
US20030146871A1 (en) 1998-11-24 2003-08-07 Tracbeam Llc Wireless location using signal direction and time difference of arrival
US6272567B1 (en) 1998-11-24 2001-08-07 Nexabit Networks, Inc. System for interposing a multi-port internally cached DRAM in a control path for temporarily storing multicast start of packet data until such can be passed
US6279060B1 (en) 1998-12-04 2001-08-21 In-System Design, Inc. Universal serial bus peripheral bridge simulates a device disconnect condition to a host when the device is in a not-ready condition to avoid wasting bus resources
EP1022876B1 (en) 1999-01-25 2006-04-19 International Business Machines Corporation Service advertisements in wireless local networks
US7184413B2 (en) * 1999-02-10 2007-02-27 Nokia Inc. Adaptive communication protocol for wireless networks
US6414955B1 (en) * 1999-03-23 2002-07-02 Innovative Technology Licensing, Llc Distributed topology learning method and apparatus for wireless networks
US6807163B1 (en) * 1999-04-01 2004-10-19 Ericsson Inc. Adaptive rate channel scanning method for TDMA wireless communications
WO2000068811A1 (en) 1999-04-30 2000-11-16 Network Forensics, Inc. System and method for capturing network data and identifying network events therefrom
EP1232614A2 (en) * 1999-05-28 2002-08-21 Basic Resources, Inc. Wireless network employing node-to-node data messaging
US6574266B1 (en) 1999-06-25 2003-06-03 Telefonaktiebolaget Lm Ericsson (Publ) Base-station-assisted terminal-to-terminal connection setup
JP4172120B2 (en) 1999-06-29 2008-10-29 ソニー株式会社 COMMUNICATION DEVICE AND COMMUNICATION METHOD, COMMUNICATION TERMINAL DEVICE
US6415342B1 (en) 1999-07-27 2002-07-02 Hewlett-Packard Company Universal serial bus controlled connect and disconnect
US6532220B1 (en) 1999-08-27 2003-03-11 Tachyon, Inc. System and method for efficient channel assignment
US6492904B2 (en) * 1999-09-27 2002-12-10 Time Domain Corporation Method and system for coordinating timing among ultrawideband transmissions
US6859831B1 (en) 1999-10-06 2005-02-22 Sensoria Corporation Method and apparatus for internetworked wireless integrated network sensor (WINS) nodes
US7020701B1 (en) 1999-10-06 2006-03-28 Sensoria Corporation Method for collecting and processing data using internetworked wireless integrated network sensors (WINS)
US6385174B1 (en) * 1999-11-12 2002-05-07 Itt Manufacturing Enterprises, Inc. Method and apparatus for transmission of node link status messages throughout a network with reduced communication protocol overhead traffic
US6593768B1 (en) 1999-11-18 2003-07-15 Intel Corporation Dual termination serial data bus with pull-up current source
US6704293B1 (en) 1999-12-06 2004-03-09 Telefonaktiebolaget Lm Ericsson (Publ) Broadcast as a triggering mechanism for route discovery in ad-hoc networks
US6751200B1 (en) * 1999-12-06 2004-06-15 Telefonaktiebolaget Lm Ericsson (Publ) Route discovery based piconet forming
CA2292828A1 (en) 1999-12-22 2001-06-22 Nortel Networks Corporation Method and apparatus for traffic flow control in data switches
US6694149B1 (en) * 1999-12-22 2004-02-17 Motorola, Inc. Method and apparatus for reducing power consumption in a network device
US6505052B1 (en) * 2000-02-01 2003-01-07 Qualcomm, Incorporated System for transmitting and receiving short message service (SMS) messages
WO2001059977A2 (en) * 2000-02-08 2001-08-16 Personal Electronic Devices, Inc. Intelligent data network
US6816510B1 (en) * 2000-02-09 2004-11-09 Koninklijke Philips Electronics N.V. Method for clock synchronization between nodes in a packet network
US6775258B1 (en) 2000-03-17 2004-08-10 Nokia Corporation Apparatus, and associated method, for routing packet data in an ad hoc, wireless communication system
US6977895B1 (en) 2000-03-23 2005-12-20 Cisco Technology, Inc. Apparatus and method for rate-based polling of input interface queues in networking devices
US7386003B1 (en) 2000-03-27 2008-06-10 Bbn Technologies Corp. Systems and methods for communicating in a personal area network
US6804232B1 (en) 2000-03-27 2004-10-12 Bbnt Solutions Llc Personal area network with automatic attachment and detachment
US6535947B1 (en) 2000-05-30 2003-03-18 International Business Machines Corporation Methods, systems and computer program products for logically segmenting devices on a universal serial bus
JP4170566B2 (en) 2000-07-06 2008-10-22 インターナショナル・ビジネス・マシーンズ・コーポレーション Communication method, wireless ad hoc network, communication terminal, and Bluetooth terminal
US7103344B2 (en) * 2000-06-08 2006-09-05 Menard Raymond J Device with passive receiver
US6381467B1 (en) * 2000-06-22 2002-04-30 Motorola, Inc. Method and apparatus for managing an ad hoc wireless network
US6973039B2 (en) 2000-12-08 2005-12-06 Bbnt Solutions Llc Mechanism for performing energy-based routing in wireless networks
US7209771B2 (en) * 2000-12-22 2007-04-24 Terahop Networks, Inc. Battery powered wireless transceiver having LPRF component and second wake up receiver
US7035240B1 (en) * 2000-12-27 2006-04-25 Massachusetts Institute Of Technology Method for low-energy adaptive clustering hierarchy
EP1386432A4 (en) * 2001-03-21 2009-07-15 John A Stine An access and routing protocol for ad hoc networks using synchronous collision resolution and node state dissemination
US7072975B2 (en) * 2001-04-24 2006-07-04 Wideray Corporation Apparatus and method for communicating information to portable computing devices
JP3870717B2 (en) 2001-05-14 2007-01-24 セイコーエプソン株式会社 Data transfer control device and electronic device
US7161926B2 (en) 2001-07-03 2007-01-09 Sensoria Corporation Low-latency multi-hop ad hoc wireless network
US6754188B1 (en) * 2001-09-28 2004-06-22 Meshnetworks, Inc. System and method for enabling a node in an ad-hoc packet-switched wireless communications network to route packets based on packet content
US20030066090A1 (en) * 2001-09-28 2003-04-03 Brendan Traw Method and apparatus to provide a personalized channel
US7421257B1 (en) * 2001-11-30 2008-09-02 Stragent, Llc Receiver scheduling in ad hoc wireless networks
US7020501B1 (en) 2001-11-30 2006-03-28 Bbnt Solutions Llc Energy efficient forwarding in ad-hoc wireless networks
DE10331519A1 (en) 2003-07-11 2005-02-03 Hekuma Gmbh Handling system for an injection molding machine and method for introducing an application into the injection mold

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6028857A (en) * 1997-07-25 2000-02-22 Massachusetts Institute Of Technology Self-organizing network
US6816460B1 (en) * 2000-03-14 2004-11-09 Lucent Technologies Inc. Location based routing for mobile ad-hoc networks
US20020027894A1 (en) * 2000-04-12 2002-03-07 Jori Arrakoski Generation broadband wireless internet, and associated method, therefor

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9257036B2 (en) * 2012-05-10 2016-02-09 University of Alaska Anchorage Long lifespan wireless sensors and sensor network
US20150120863A1 (en) * 2013-10-25 2015-04-30 Qualcomm Incorporated Proxy network device selection in a communication network
US10659961B2 (en) 2015-08-21 2020-05-19 Afero, Inc. Apparatus and method for sharing WiFi security data in an internet of things (IoT) system
US9843929B2 (en) 2015-08-21 2017-12-12 Afero, Inc. Apparatus and method for sharing WiFi security data in an internet of things (IoT) system
US10149154B2 (en) 2015-08-21 2018-12-04 Afero, Inc. Apparatus and method for sharing WiFi security data in an internet of things (IoT) system
US9942837B2 (en) * 2015-08-25 2018-04-10 Afero, Inc. Apparatus and method for a dynamic scan interval for a wireless device
US20170078954A1 (en) * 2015-08-25 2017-03-16 Afero, Inc. Apparatus and method for a dynamic scan interval for a wireless device
US9503969B1 (en) * 2015-08-25 2016-11-22 Afero, Inc. Apparatus and method for a dynamic scan interval for a wireless device
US10419342B2 (en) * 2015-09-15 2019-09-17 At&T Mobility Ii Llc Gateways for sensor data packets in cellular networks
US10805344B2 (en) 2015-12-14 2020-10-13 Afero, Inc. Apparatus and method for obscuring wireless communication patterns
US20170171747A1 (en) * 2015-12-14 2017-06-15 Afero, Inc. System and method for establishing a secondary communication channel to control an internet of things (iot) device
US10091242B2 (en) * 2015-12-14 2018-10-02 Afero, Inc. System and method for establishing a secondary communication channel to control an internet of things (IOT) device
US10447784B2 (en) 2015-12-14 2019-10-15 Afero, Inc. Apparatus and method for modifying packet interval timing to identify a data transfer condition
US10560909B2 (en) * 2017-02-17 2020-02-11 Texas Instruments Incorporated Signal analyzer and synchronizer for networks
WO2020150349A1 (en) * 2019-01-15 2020-07-23 Loud-Hailer, Inc. Device presence method and system for mesh network management
US11909623B2 (en) 2019-01-15 2024-02-20 Loud-Hailer, Inc. Device presence method and system for mesh network management
WO2021006932A1 (en) * 2019-07-08 2021-01-14 Middle Chart, LLC Spatially self-verifying array of nodes
DE102020113236A1 (en) 2020-05-15 2021-11-18 Krohne Messtechnik Gmbh Method for locating a network device in a mesh network and corresponding mesh network
US11881099B2 (en) 2020-05-15 2024-01-23 Krohne Messtechnik Gmbh Method for locating a network device in a mesh network and corresponding mesh network

Also Published As

Publication number Publication date
US11445523B2 (en) 2022-09-13
US20200205172A1 (en) 2020-06-25
US10588139B2 (en) 2020-03-10
US7979098B1 (en) 2011-07-12
US9674858B2 (en) 2017-06-06
US7623897B1 (en) 2009-11-24
US10863528B2 (en) 2020-12-08
US7979096B1 (en) 2011-07-12
US20210120567A1 (en) 2021-04-22
US20170135122A1 (en) 2017-05-11
US7421257B1 (en) 2008-09-02
US20150063325A1 (en) 2015-03-05

Similar Documents

Publication Publication Date Title
US20150029914A1 (en) Energy efficient forwarding in ad-hoc wireless networks
US7020501B1 (en) Energy efficient forwarding in ad-hoc wireless networks
US7400595B2 (en) Method and apparatus for battery life extension for nodes within beaconing networks
US10993201B2 (en) Location aware networking for ad-hoc networks and method therefor
CN101479999B (en) Improved 802.11 mesh architecture
US8583768B2 (en) Wireless sensor network and management method for the same
US11784950B2 (en) Mesh networking using peer to peer messages for a hospitality entity
Conner et al. Experimental evaluation of synchronization and topology control for in-building sensor network applications
Gupta et al. Design issues and challenges in wireless sensor networks
EP1867109B1 (en) A sensor network
CN110035417A Mesh networking using peer-to-peer messages
Guerroumi et al. Hybrid data dissemination protocol (HDDP) for wireless sensor networks
Lee et al. Wireless sensor networks
Choudhury et al. Location-independent coverage in wireless sensor networks
Guo et al. OPS: Opportunistic pipeline scheduling in long-strip wireless sensor networks with unreliable links
Matsui et al. ECORS: Energy consumption-oriented route selection for wireless sensor network
Conner et al. Experimental evaluation of topology control and synchronization for in-building sensor network applications
KR101000795B1 (en) Address-based Wireless Sensor Network and Synchronization Method Thereof
Jabbari et al. Building an energy efficient linear sensor (EELS) infrastructure for smart cities
Omodunbi et al. A review of energy conservation in wireless sensors networks
Santos et al. A geographic routing algorithm for wireless sensor networks
Yoon Power management in wireless sensor networks
Du et al. Efficient energy management protocol for target tracking sensor networks
Baha'A et al. Pendulum: an energy efficient protocol for wireless sensor networks
Wong et al. Waking up sensor networks

Legal Events

Date Code Title Description
AS Assignment

Owner name: TRI-COUNTY EXCELSIOR FOUNDATION, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AZURE NETWORKS, LLC;REEL/FRAME:024915/0521

Effective date: 20100610

AS Assignment

Owner name: AZURE NETWORKS, LLC, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TRI-COUNTY EXCELSIOR FOUNDATION;REEL/FRAME:030943/0873

Effective date: 20130731

AS Assignment

Owner name: BBNT SOLUTIONS LLC, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ELLIOTT, BRIG BARNUM;REEL/FRAME:031012/0506

Effective date: 20021216

Owner name: AZURE NETWORKS, LLC, TEXAS

Free format text: CONFIRMATORY ASSIGNMENT;ASSIGNOR:OSO IP, LLC;REEL/FRAME:031014/0650

Effective date: 20130809

Owner name: OSO IP, LLC, TEXAS

Free format text: CHANGE OF NAME;ASSIGNOR:BALTHER TECHNOLOGIES, LLC;REEL/FRAME:031014/0595

Effective date: 20130524

Owner name: BALTHER TECHNOLOGIES, LLC, TEXAS

Free format text: CHANGE OF NAME;ASSIGNOR:POWER MESH NETWORKS, LLC;REEL/FRAME:031014/0648

Effective date: 20091211

Owner name: AZURE NETWORKS, LLC, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:POWER MESH NETWORKS, LLC;REEL/FRAME:031012/0599

Effective date: 20090820

Owner name: STRAGENT, LLC, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BBN TECHNOLOGIES CORP.;REEL/FRAME:031012/0570

Effective date: 20080707

Owner name: OSO IP, LLC, TEXAS

Free format text: CONFIRMATORY ASSIGNMENT;ASSIGNOR:STRAGENT, LLC;REEL/FRAME:031014/0591

Effective date: 20130809

Owner name: BBN TECHNOLOGIES CORP., MASSACHUSETTS

Free format text: MERGER;ASSIGNOR:BBNT SOLUTIONS LLC;REEL/FRAME:031014/0597

Effective date: 20051122

Owner name: POWER MESH NETWORKS, LLC, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:STRAGENT, LLC;REEL/FRAME:031012/0575

Effective date: 20081010

AS Assignment

Owner name: RAYTHEON BBN TECHNOLOGIES CORP., MASSACHUSETTS

Free format text: NUNC PRO TUNC ASSIGNMENT WITH AN EFFECTIVE DATE OF OCTOBER 16, 2008;ASSIGNORS:ELLIOTT, BRIG BARNUM;PEARSON, DAVID SPENCER;SIGNING DATES FROM 20131011 TO 20131030;REEL/FRAME:031532/0774

Owner name: STRAGENT, LLC, TEXAS

Free format text: NUNC PRO TUNC ASSIGNMENT WITH AN EFFECTIVE DATE OF OCTOBER 17, 2008;ASSIGNOR:RAYTHEON BBN TECHNOLOGIES CORP.;REEL/FRAME:031532/0780

Effective date: 20131016

Owner name: POWER MESH NETWORKS, LLC, TEXAS

Free format text: NUNC PRO TUNC ASSIGNMENT WITH AN EFFECTIVE DATE OF OCTOBER 18, 2008;ASSIGNOR:STRAGENT, LLC;REEL/FRAME:031532/0783

Effective date: 20131031

AS Assignment

Owner name: III HOLDINGS 1, LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AZURE NETWORKS, LLC;REEL/FRAME:033167/0500

Effective date: 20140528

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION