Technical blog

You can find the original posts here

Zigbee Smart Energy solutions

posted Sep 27, 2008, 10:04 AM by Daniel Berenguer

After being a little rough on Zigbee in the past and complaining about the lack of products compatible with the Home Automation profile, I must admit that this standard has made a lot of progress in the Energy Management area during the last year. While the big companies don't dare to release Zigbee-compatible devices for the Home Automation market, smaller companies specialized in the Energy sector are surprising us with smart solutions for saving energy and keeping our bills under control. It's obvious that the range of solutions for energy management applications has traditionally been much smaller than the portfolio of technologies sold into the Home Automation market. This translates into a less exposed market, less competition and fewer old prejudices. We should also take into account how vertical the Energy sector is compared with the sometimes ambiguous Home Automation environment, even though nobody will be shocked if we include energy management as part of the home automation ecosystem. The reality is that a dozen companies, in some cases sponsored by utilities, have decided to release specific products for Energy Management applications. Most of these smart energy solutions have been certified by the Zigbee Alliance and claim to be strictly compatible with the Zigbee Smart Energy profile, meaning that the products are interoperable with one another.

Plug-in modules that measure the energy consumed by the appliance connected to the outlet, thermostats that display temperature and energy costs, and energy meters that transmit data to a central point are some of the products offered under the Zigbee logo. Around this range of products, utilities, web portals and end users are given a way to control and monitor energy consumption (and production, of course) and to save resources in the interest of more sustainable development.

Link to the list of Zigbee Smart Energy Certified Products

RS485 vs CAN

posted Sep 8, 2008, 11:34 PM by Daniel Berenguer   [ updated Sep 8, 2008, 11:35 PM ]


This question usually arises during the initial evaluation of communication technologies in any project with distributed control requirements. This article aims to describe the differences between these two technologies and, finally, to offer a way of deciding which of them is best for our application, bearing in mind the key requirements of our project: lead time, budget, platform and development skills.

RS485 was created in 1983 as a way of providing multidrop communication capabilities for devices with at least one UART. RS485 is often called the "multiplexed" RS232 - any of the devices connected to the bus can talk to any other in half-duplex mode. Half-duplex means that the communication is bidirectional but only one device can transmit at a time (one transmits, the others listen, and so on). But RS485 is not a protocol; it's just the physical path and the basic electrical rules over which devices can communicate. RS485 provides a way of transmitting serial messages over the multidrop bus, but the contents of these messages are totally user-defined. The structure of the communication frame, the timing of messages, the way of addressing devices on the bus, the method for avoiding data collisions, etc. are some of the steps that a designer must cover when defining a protocol over RS485.
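As an illustration (not from the original post), the sketch below shows the kind of frame a designer might define on top of RS485. Every choice here - the start byte, the one-byte addresses, the length field and the checksum - is an arbitrary assumption, only meant to show that the protocol layer is entirely up to the designer:

    /* A minimal, hypothetical frame format built on top of RS485.
     * RS485 itself imposes none of this: start byte, addressing,
     * length and checksum are all choices made by the designer. */
    #include <stdint.h>
    #include <stddef.h>

    #define FRAME_START 0x7E

    /* Build a frame: [start][dst][src][len][payload...][checksum] */
    size_t build_frame(uint8_t *buf, uint8_t dst, uint8_t src,
                       const uint8_t *payload, uint8_t len)
    {
        size_t i = 0;
        uint8_t sum = 0;

        buf[i++] = FRAME_START;
        buf[i++] = dst;
        buf[i++] = src;
        buf[i++] = len;
        for (uint8_t n = 0; n < len; n++) {
            buf[i++] = payload[n];
            sum += payload[n];
        }
        sum += dst + src + len;
        buf[i++] = (uint8_t)(~sum + 1);  /* two's-complement checksum over
                                            addresses, length and payload */
        return i;                        /* number of bytes to push out of the UART */
    }

The bytes produced by build_frame() would then be written to the UART one by one, with the RS485 transceiver's driver-enable line asserted during the transmission.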

CAN (Controller Area Network) was created in the 80's by Robert Bosch GmbH and initially targeted the automotive industry. In contrast to RS485, CAN not only provides a physical medium for communicating but also defines the necessary mechanisms for addressing packets and avoiding data collisions. CAN specifies the structure of the data frame - the position and number of bytes for the address, the data and the control fields. Everything follows a precise structure and timing in order to guarantee quality of transmission, delivery of every transmitted packet and speed, and to avoid data corruption. CAN is thus a very robust technology, and because of that it is currently used in critical environments such as vehicles, planes, vessels and industry in general.
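For comparison, the frame layout in CAN is already fixed. The following sketch (my addition, not part of the original article) sends a single frame from a Linux host through SocketCAN; it assumes a CAN interface named "can0" and omits error checking for brevity:

    /* Sending one CAN frame from a Linux host using SocketCAN.
     * Only an illustration of the fixed frame layout: identifier,
     * DLC and up to 8 data bytes. Assumes an interface named "can0". */
    #include <linux/can.h>
    #include <linux/can/raw.h>
    #include <net/if.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

        struct ifreq ifr;
        strcpy(ifr.ifr_name, "can0");
        ioctl(s, SIOCGIFINDEX, &ifr);          /* resolve the interface index */

        struct sockaddr_can addr = { 0 };
        addr.can_family  = AF_CAN;
        addr.can_ifindex = ifr.ifr_ifindex;
        bind(s, (struct sockaddr *)&addr, sizeof(addr));

        struct can_frame frame = { 0 };
        frame.can_id  = 0x123;                 /* 11-bit identifier (also the priority) */
        frame.can_dlc = 2;                     /* payload length: 0..8 bytes */
        frame.data[0] = 0x01;
        frame.data[1] = 0xFF;

        write(s, &frame, sizeof(frame));       /* arbitration, acknowledgement and
                                                  retransmission are handled by the
                                                  CAN controller itself */
        close(s);
        return 0;
    }

Note that the application only chooses the identifier and up to 8 data bytes; everything else is taken care of by the controller.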

Implementing CAN from scratch is not necessary, as there is nowadays a large number of manufacturers selling CAN controllers and microcontrollers with the whole CAN stack included. As a result, CAN is often preferred because it provides a simple way of designing true multimaster systems without having to define a protocol from scratch. On the other hand, RS485 is typically used in master-slave environments, where a data collision detector is not necessary. The component cost is also lower in the RS485 case, as most microcontrollers, even the smallest ones, have a UART. CAN, by contrast, usually needs more expensive microcontrollers with larger memory and either an integrated CAN controller or an SPI port for driving an external CAN controller. This makes CAN a bit of an overkill for small distributed sensors, even if it can't be considered an expensive solution.

The following table tries to summarize the most common features of both technologies:

Feature                              | RS485                                          | CAN
Necessary microcontroller interface  | UART                                           | CAN controller or SPI
Native collision detection           | No, must be implemented in software if needed  | Yes, non-destructive bitwise arbitration
Maximum communication speed          | 10 Mbit/s                                      | 1 Mbit/s
Maximum bus length                   | 1200 m (at 100 kbit/s)                         | 500 m (at 125 kbit/s)
Layers of the ISO model covered      | Physical layer                                 | Physical layer and data link layer
Maximum data in a single frame       | Unlimited, defined by the application          | 8 bytes
Component costs                      | Very low                                       | Low to medium
Development time                     | Medium to high                                 | Low to medium
Typical use                          | Master-slave applications                      | Multimaster applications
Examples of popular protocols        | Modbus, Profibus                               | CANopen, DeviceNet, J1939


Does this mean that RS485 is only suitable for master-slave protocols? Not necessarily. A number of vendors have implemented their own proprietary multimaster protocols based on RS485. The way they detect collisions and ensure data integrity is usually not disclosed, but such solutions are certainly possible. Some open multimaster protocols use control bytes for reserving and releasing the bus, and can even detect collisions when two address fields overlap. Another possible solution is to use some kind of synchronization between nodes.
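One well-known trick, sketched below under stated assumptions, is to keep the RS485 receiver enabled while transmitting and compare the echoed bytes with the transmitted ones. The uart_send_byte() and uart_read_byte() functions are hypothetical placeholders for whatever UART HAL the target microcontroller provides:

    /* Hypothetical sketch of collision detection on a multimaster RS485 bus.
     * Because an RS485 transceiver can keep its receiver enabled while
     * transmitting, a node can read back every byte it sends; if the echo
     * differs, another node was driving the bus at the same time. */
    #include <stdint.h>
    #include <stdbool.h>

    extern void    uart_send_byte(uint8_t b);   /* hypothetical HAL call */
    extern uint8_t uart_read_byte(void);        /* blocking read of the echoed byte */

    /* Returns true if the whole frame went out without a detected collision. */
    bool rs485_send_checked(const uint8_t *frame, int len)
    {
        for (int i = 0; i < len; i++) {
            uart_send_byte(frame[i]);
            if (uart_read_byte() != frame[i]) {
                /* Collision: the caller should back off for a node-specific
                   random delay and retry the whole frame. */
                return false;
            }
        }
        return true;
    }

On a real bus this comparison alone is not bulletproof (both colliding nodes may read back corrupted bytes), which is why such schemes are usually combined with checksums and random back-off delays.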

Other sources of information:

http://www.can-cia.org
http://en.wikipedia.org/wiki/Controller_Area_Network
http://en.wikipedia.org/wiki/RS-485

Distributed control systems at home

posted Sep 8, 2008, 11:33 PM by Daniel Berenguer


Distributed control is a concept that originated in the 70's to cover the automation needs of a growing industry. Before that, centralized automation systems, where a main computer did all the measurements and controls, were the only valid model. The need to distribute intelligence across an industrial plant and avoid complex cable layouts made some companies think about a way of providing a communication method for their controllers. That was the genesis of the field buses. The overall solution was then progressively adopted by industry, and nowadays nobody questions the advantages of the distributed control model.

Distributed control systems are typically found in the manufacturing industry and also in buildings, vehicles, planes and vessels. Nevertheless, is the distributed model really suitable for home automation applications? At the beginning of this century Microsoft tried to impose a centralized model based on a PC running Windows. Microsoft knew that no distributed control system could compete against a PC in terms of flexibility, power and price. Moreover, most home automation manufacturers already provided drivers and applications for controlling their systems from a Windows machine. Other companies such as HomeSeer Technologies or Home Automated Living have been providing very good tools for transforming a simple PC into a programmable home controller with lots of functionality at very competitive prices. Against this background, professional distributed systems found an important adversary in the home environment. Only Lonworks and EIB seemed to win some market share from the PC-based systems. Other distributed systems such as C-Bus or InstallBus also had some success, but only in localized markets.

These distributed systems are commonly installed in large houses, where they provide important savings in cable runs. But small and medium houses are still dominated by centralized systems. Home automation devices are rarely as powerful as an industrial controller. Moreover, they usually emphasize ease of installation over other aspects that are vital in other environments (speed of communication, robustness, programming capabilities, etc.), and most pure home controllers (as opposed to building controllers) often delegate the most complex decisions to an external PC. As a result, implementing a PC-based home automation network in a medium-size house or apartment is often less expensive than using distributed systems; distributed systems are usually economically advantageous only in large installations, where the complex cable runs required by a centralized system are hard to justify.

Ok, so could we state that PC-based central systems are less expensive than distributed systems for the home? And what about other aspects? Is the PC-based model therefore the best for home automation applications? This is of course a very personal point of view, but I think that distributed systems are less error-prone, as every controller does a different job. In a PC-based system, by contrast, a single computer handles a number of tasks and the complexity of the system sometimes grows exponentially with the number of endpoints to control. Hence the popular phrase: "when the computer hangs, the whole installation becomes useless".

So must we live with that dilemma: low cost versus robustness? As a home automation enthusiast and developer of embedded controllers I began thinking about a way of breaking that rule. First I thought that Linux and low-cost 32-bit platforms could be a good starting point. Then I discovered some good open-source resources that perfectly complemented the idea I had in mind. After more than two years of work the opnode project is taking shape...

The OPNODE concept

posted Sep 8, 2008, 11:22 PM by Daniel Berenguer   [ updated Sep 8, 2008, 11:32 PM ]


The OPNODE ("open node") project started some time ago when Alicia and I decided to give back part of our experience to the open community, having gathered lots of input from this community over the years and applied it to our professional projects. The building automation industry is something I know well, and home automation will always be one of my passions. On the other hand, I've always considered distributed systems a bit too expensive for home applications. Thus, developing a set of low-cost open source controllers for building/home applications became the leitmotiv of this young project. The targets of this project have always been:

  • Define an automation system not only for the home but one that could also be used in buildings and industry. The system should therefore be powerful and programmable.
  • All the designs should be open source. I really believe in the open source philosophy, not only as a way of returning knowledge to the community but also as a business - a professional business, I mean, not an abusive one where a commercial company looks at the open source community only as a source of ideas and solutions. Moreover, the open source formula allows big projects to be maintained by independent enthusiasts, always following collaborative criteria. I'll need some help on the development side soon, and "open-sourcing" the project should open the door to anyone wanting to participate in its evolution.
  • The controllers developed under this project should follow a distributed model. In other words:
      • Every controller should be capable of taking decisions on its own.
      • We need a multimaster communication technology in order to ensure interoperability between devices. xAP not only covers this point but also allows the integration of any opnode into any existing xAP network, with the possibility of interacting with third-party devices (see the sketch after this list). This protocol is in fact one of the key factors that make the opnode concept really open.
      • We should be able to define redundant tasks and fail-safe procedures in order to guarantee the reliability of our network.
  • Any opnode must be optimized in terms of power consumption. The choice of hardware platform must then be justified in each case according to the role that the device will play within the network.
  • The configuration and programming of any controller (or set of controllers) should be done via a web interface. Besides, the web interface should provide a way of controlling and monitoring the endpoints managed by the device. This web interface will add some extra complexity to the development but should ultimately provide a simple way of configuring and debugging our network from any location. The use of an external piece of software to program the system is thus ruled out.
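To give an idea of what the xAP messaging mentioned above looks like, here is a rough sketch (my own illustration, not project code) of how an opnode could announce itself on a xAP network. xAP messages are plain-ASCII blocks broadcast over UDP, conventionally on port 3639; the field values below are purely illustrative and the exact heartbeat schema should be checked against the xAP specification:

    /* Rough sketch of an opnode announcing itself on a xAP network.
     * The message body and identifiers below are illustrative only. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        const char *msg =
            "xap-hbeat\n"
            "{\n"
            "v=12\n"
            "hop=1\n"
            "uid=FF123400\n"                       /* illustrative unique id */
            "class=xap-hbeat.alive\n"
            "source=opnode.opn-max.livingroom\n"   /* illustrative source name */
            "interval=60\n"
            "}\n";

        int s = socket(AF_INET, SOCK_DGRAM, 0);
        int yes = 1;
        setsockopt(s, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

        struct sockaddr_in dst = { 0 };
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(3639);              /* conventional xAP UDP port */
        dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);

        sendto(s, msg, strlen(msg), 0, (struct sockaddr *)&dst, sizeof(dst));
        close(s);
        return 0;
    }

Real xAP devices broadcast this kind of heartbeat periodically and listen for messages from other nodes on the same LAN, which is what makes the multimaster, peer-to-peer model possible.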

The current status of the project, after its first two years of life, is not bad at all. There are three highly configurable controllers, two of them focusing exclusively on control tasks and the third targeting OEM applications:

opn-one

1-Wire master with xAP and xPL interfaces.

opn-max

Highly programmable xAP controller with Perl and PHP scripting.

opn-232

Ethernet-RS232 hardware platform aimed at integrating any RS232 device into the opnode (xAP) network. One application is available at this moment: opn-x10.

Besides, a set of sensors and actuators is currently being developed, so the family is going to grow shortly.

The following diagram shows the architecture of a typical distributed network following the opnode philosophy:





The above diagram introduces some new concepts regarding the communication interfaces used in each case:

Orange network

This is the area where the field buses and low-end devices reside. The orange network is typically formed by low-power devices with no Ethernet interface. These devices communicate with a green-network node through a control-oriented technology.

Green network

Formed by high-end controllers that communicate with each other through xAP (or any other IP-based technology). Multimedia players, xAP controllers and gateways belong to this functional group.

Blue network

"Blue network" is a simple name given to the Internet connection of our system.

The architecture described in this article is not an invention of the opnode team. This architecture, where a TCP/IP network cohabits with control-oriented buses, has been widely used in industry for a long time. Control buses were specifically conceived to transport control-oriented messages, and the devices connected to these buses usually have lower requirements in terms of memory and power consumption. In summary, the orange network is totally oriented to the endpoint side. On the other hand, the TCP/IP network, with much more bandwidth but less oriented to control applications than the field buses, is used as the communication trunk and integration technology for all the high-end controllers.

I'm currently working on some ideas for possible new opnodes. I still have to decide between a distributed music player system and a new multimaster control bus based on RS485. The important thing is not to stop, I guess...

TCP/IP vs field buses for control applications. The eternal discussion

posted Sep 8, 2008, 11:17 PM by Daniel Berenguer


Why not use TCP/IP for control applications instead of those complicated field buses? Indeed, this discussion comes up every day in most environments with control needs. But what are the typical positions on this subject?

Software engineers typically defend TCP/IP as the communication channel for control applications. Their main argument is that this channel can be found in most computers and appliances nowadays. Moreover, Ethernet and Wi-Fi hardware interfaces are cheap compared to some years ago.

TCP/IP is one of the few channels that allow combining big packets of data with short control-oriented messages. This makes the technology especially suitable for home applications, where multimedia and control can share a common communication system. Besides, Ethernet-certified cables and RJ45 connectors are very cheap compared to some control-oriented cabling systems. Most buildings already contain a LAN infrastructure with the necessary hardware, and connecting IP control systems to those infrastructures is as easy as installing a new computer on the LAN.

On the other hand, control engineers don't usually like the idea of relying on TCP/IP for controlling critical applications. Furthermore, Ethernet follows a physical star topology, which complicates the installation process in networks with large numbers of endpoints, and Wi-Fi is often avoided in industrial environments due to the electromagnetic noise potentially produced by this technology. Control-specific technologies usually follow a bus topology, reducing cable runs and providing a separate communication channel for control messaging. Control interfaces such as CAN, RS485, Lonworks, LIN and others can be used with low-power microcontrollers, as the communication protocol is often easier to maintain than a TCP/IP stack. These simple controllers participate in the bus as listeners, transmitters or both, but never have to worry about being flooded with traffic coming from a computer or a media server.

Nevertheless, integrators know that TCP/IP is an excellent complement to industrial control solutions. Ethernet is still the natural way of connecting a PC to a network, and the inclusion of web technology in remote monitoring applications forces us to mix somehow the best of both worlds. Multimedia, temperature sensing, web access, binary control, SCADAs, ... when all these applications have to be integrated into a single solution, a hybrid network is often the best choice.

The following diagram, extracted from the opnode project, is an example of the integration of TCP/IP with several control-oriented technologies:



As you can see, TCP/IP, or "the green network", is used in the above example to transport multimedia data and also as the integration point between the different control technologies. The link between every control technology and the IP world relies on a high-performance gateway that translates and filters the commands coming from both sides, avoiding data overload on the control side and reducing the number of short commands on the IP LAN.
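The filtering role of such a gateway can be sketched in a few lines. The example below (an illustration of mine, not opnode code) assumes a Linux gateway with a SocketCAN interface on the control side; it reads every frame from the bus and forwards only a whitelist of identifiers to a monitoring host on the LAN as UDP datagrams. The interface name, IP address, port and identifiers are all made-up values:

    /* Simplified sketch of a control-bus/IP gateway: read frames from a
     * SocketCAN interface and forward only the relevant ones to the LAN. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <linux/can.h>
    #include <linux/can/raw.h>
    #include <net/if.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int is_relevant(canid_t id)
    {
        /* Forward only a few identifiers; everything else stays on the bus. */
        return id == 0x101 || id == 0x102;        /* e.g. temperature, alarm */
    }

    int main(void)
    {
        /* Control side: raw CAN socket bound to can0 */
        int can = socket(PF_CAN, SOCK_RAW, CAN_RAW);
        struct ifreq ifr;
        strcpy(ifr.ifr_name, "can0");
        ioctl(can, SIOCGIFINDEX, &ifr);
        struct sockaddr_can caddr = { 0 };
        caddr.can_family  = AF_CAN;
        caddr.can_ifindex = ifr.ifr_ifindex;
        bind(can, (struct sockaddr *)&caddr, sizeof(caddr));

        /* IP side: UDP socket towards some monitoring host on the LAN */
        int udp = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst = { 0 };
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(5000);             /* illustrative port */
        inet_pton(AF_INET, "192.168.1.10", &dst.sin_addr);

        for (;;) {
            struct can_frame f;
            if (read(can, &f, sizeof(f)) != (ssize_t)sizeof(f))
                continue;
            if (is_relevant(f.can_id))            /* filter, don't flood the LAN */
                sendto(udp, &f, sizeof(f), 0,
                       (struct sockaddr *)&dst, sizeof(dst));
        }
    }

A real gateway would of course also work in the opposite direction, translating the few relevant commands received from the LAN into frames on the control bus.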

This architecture is widely used in industrial and building applications. No matter which control bus is installed, it will be connected to the LAN infrastructure at some point.

After this explanation, a new term comes into focus: the cost per endpoint. This parameter gives an idea of the cost (price and power consumption) of controlling a single endpoint using a given technology or mix of technologies. IP controllers are more expensive in both price and consumption, so the only way of reducing their cost per endpoint is to add more control points to the device. For example, a hypothetical IP controller costing ten times as much as a simple bus node only becomes competitive once it handles ten or more endpoints. This is why most IP controllers are designed to control large numbers of endpoints, whilst other less expensive control-oriented technologies can afford devices that control just the temperature of one room.

Zigbee: another "unstandardizable" standard?

posted Sep 8, 2008, 11:13 PM by Daniel Berenguer


It seems that Zigbee is finally following in the footsteps of other "promising" standards. Started in 2003, Zigbee appeared as the solution for all our frustrations in terms of interoperable wireless control. Most of the important manufacturers were there, supporting the emerging standard and publishing imminent dates for the release of new products. Any Zigbee device from any manufacturer was going to provide total interoperability with any Zigbee-compliant product. From light control systems to thermostats and even remote controls for multimedia equipment, this standard made us think that the total solution for controlling devices wirelessly was just around the corner. The massive production of Zigbee-compliant devices was initially announced for 2004, then 2005, ... Now, three years after all the fanfare, Zigbee is still in the oven.

This article doesn't aim to question Zigbee itself as a technology. Indeed, Zigbee was started on a solid technological base. Built on a well-defined low-level standard such as IEEE 802.15.4 and providing support for different frequency bands, it soon led a great number of IC manufacturers to release Zigbee-compatible interfaces and OEM modules. The price of the new RF controllers corroborated the promise of producing really low-cost devices, and the presentation of a couple of prototypes at some international exhibitions removed the doubts of some of the most critical technologists. Moreover, the wish of the Zigbee Alliance has always been to provide reliable, interoperable, low-cost devices.

So what happens with these international standards? Why is it so hard to move these things forward? That is the real subject of this article.

Communication standards are often created as a way of sharing costs and resources among the promoters. Besides, a company wanting to develop a product under a certain standard will find a communication protocol already defined and even well-tested platforms available to start developing from. But the companies that create the standard always take on the extra work and cost of participating in the definition of the new technology. As a result, these companies usually try to impose their decisions, all of them based on their own commercial and technical interests. When a committee is formed by dozens of members, each with its own market and its own pre-existing technological base, the negotiation process becomes complicated, especially when the members are big companies that don't mind delaying the release of the new technology indefinitely.

In contrast to these open standards, other "de facto" standards led by a single company producing a one-chip solution sometimes get better results. This is the case of Lonworks, a technology created and promoted by Echelon. But I don't mean that "democratic" open standards have a worse future than proprietary ones. Some open initiatives such as CANopen, DeviceNet, EIB, BACnet, etc. are examples of collaboration among companies and academic institutions. The secret of success maybe lies in understanding that interoperability is something positive for the market.

Balanced designs - choosing the correct technology for a new product

posted Sep 8, 2008, 11:08 PM by Daniel Berenguer


"Balanced design" is a concept that I often use in my reports during the initial definition of architectures for embedded controllers. It's a term that I invented to describe the proportion between device functionality and platform power. Device functionality is the set of functional features provided by the device (ex: amount of control points, graphical interface, communication channels, processing speed, etc.). Platform power gives an idea about the amount of technological resources invested in the controller, most of them hardware capabilities but others related to the operational system (software) itself. Note that this term is only used by me into my projects, mainly embedded controllers and gateways but I guess that it could also be applied to any technological solution in the market. On the other hand, the concept only tries to serve as a parameter when comparing and choosing technical solutions. It should never be used as a way of evaluating the commercial aspects of a product. Unfortunately, technical excellence doesn't always translates into commercial success.

Thus, when could a device be considered technologically balanced? Rating the "balance" of an electronic product depends on the following points:

  • Functionality provided by the device
  • Speed of processing
  • Immunity to faults
  • Configuration and programming capabilities
  • User interfaces
  • Communication channels

In other words, before quantifying the balance of a product, the product itself must comply with its specifications and provide the functionality it is supposed to have. After that, the platform power is evaluated. The optimum balance of a device is reached when the chosen platform meets but doesn't exceed the computing needs. Thus, a product might not be well-balanced at the beginning of its life but can gain functionality and progressively improve its technical balance with the release of new versions. On the other hand, a device with an overpowered hardware platform and an unnecessarily costly OS will always present a low technological balance.

But what is the point of achieving a good technological balance? These are the most interesting benefits of following this philosophy:

  • Avoid unnecessary power consumption
  • Simplify the maintenance of the product by reducing the number of components
  • Provide an optimum embedded solution
  • Present a compact product appearance
  • Avoid unnecessary noise and production of interferences
  • Reduce the overall costs of the product
