You can find the original posts here.
Technical blog
Zigbee Smart Energy solutions
After being a little rough on Zigbee in the past and complaining about the lack of products compatible with the Home Automation profile, I must admit that this standard has made a lot of progress in the Energy Management area during the last year. While the big companies still don't dare to release Zigbee-compatible devices for the Home Automation market, smaller companies specialized in the Energy sector are surprising us with smart solutions for saving energy and controlling our bills. The offer of solutions for energy management applications has traditionally been much smaller than the portfolio of technologies commercialized around the Home Automation market. This translates into a less exposed market, less competition and fewer ingrained prejudices. We should also take into account how vertical the Energy sector is compared with the sometimes ambiguous Home Automation environment, even though nobody will be shocked if we include energy management as part of the home automation ecosystem. The reality is that a dozen companies, in some cases sponsored by utilities, have decided to release specific products for Energy Management applications. Most of these smart energy solutions have been certified by the Zigbee Alliance and claim to be strictly compatible with the Zigbee Smart Energy profile, meaning that the products are interoperable with each other.

Plug-in modules measuring the energy consumed by the appliance connected to the outlet, thermostats displaying temperature and energy costs, and energy meters transmitting data to a central point are some of the offers designed under the Zigbee logo. Around this range of products, utilities, web portals and end users are offered a way to control and monitor energy consumption (and production, of course) and to save resources in the interest of more sustainable development.

Link to the list of Zigbee Smart Energy Certified Products
RS485 vs CAN
This question usually arises during the initial evaluation of communication technologies in any project with distributed control requirements. This article aims to describe the differences between these two technologies and finally offer a way of deciding which of them is best for our application, bearing in mind the key requirements of our project: lead time, budget, platform and development skills.

RS485 was created in 1983 as a way of providing multidrop communication capabilities for devices with at least one UART. RS485 is often called the "multiplexed" RS232: any of the devices connected to the bus can talk to any other in half-duplex mode. Half-duplex means that the communication is bidirectional but only one device can transmit at a time (one transmits, the others listen, and so on). But RS485 is not a protocol; it is just the physical path and the basic electrical rules over which any device can communicate. RS485 provides a way of transmitting serial messages over the multidrop bus, but the contents of these messages are totally user-defined. The structure of the communication frame, when each device is allowed to transmit, the way of addressing devices on the bus, the method for avoiding data collisions, etc. are some of the points that a designer must cover when defining a protocol over RS485.

CAN (Controller Area Network) was created in the 80's by Robert Bosch GmbH and initially targeted the automotive industry. CAN, in contrast to RS485, not only provides a physical medium for communicating but also defines the necessary mechanisms for addressing packets and avoiding data collisions. CAN specifies the structure of the data frame: the position and number of bytes for the address, the data and the control fields. Everything follows a precise structure and timing in order to guarantee quality of transmission, delivery of every transmitted packet and speed, and also to avoid corruption of data. CAN is thus a very reliable technology, and because of that it is currently used in critical environments such as vehicles, planes, vessels and industry in general. Implementing CAN from scratch is not necessary, as there is nowadays a good number of manufacturers selling CAN controllers and microcontrollers with the whole CAN stack included.

As a result, CAN is often preferred because it provides a simple way of designing true multimaster systems without having to define a protocol from scratch. On the other hand, RS485 is typically used in master-slave environments, where a data collision detector is not necessary. The cost in components is also lower in the RS485 case, as most microcontrollers have a UART port, even the smallest ones. In contrast, CAN usually needs more expensive microcontrollers with larger memory and an integrated CAN controller, or an SPI port for driving an external CAN controller. This makes CAN a bit of an overkill for small distributed sensors, even if it can't be considered an expensive solution. The following table summarizes the most common features of both technologies:

| Feature | RS485 | CAN |
| --- | --- | --- |
| Layers defined | Physical layer only | Physical layer plus framing, addressing and arbitration |
| Typical topology | Master-slave | True multimaster |
| Collision handling | Left to the protocol designer | Built into the controller |
| Required hardware | Any microcontroller with a UART plus an RS485 transceiver | Microcontroller with an integrated CAN controller (or an external one over SPI) plus a CAN transceiver |
| Relative cost | Lower | Higher |
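To make the difference more concrete, here is a minimal sketch, in C, of what "defining a protocol over RS485" can look like. The start byte, field layout and CRC choice are assumptions made up for this example, not part of any standard; RS485 itself says nothing about them:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical frame layout for a simple master-slave protocol over RS485:
 *   [SOF][dst addr][src addr][length][payload ...][CRC hi][CRC lo]
 * Every choice here (start byte, field sizes, CRC polynomial) is a design
 * decision left to the implementer, since RS485 only defines the wire.
 */
#define FRAME_SOF    0x7E
#define MAX_PAYLOAD  32

/* CRC-16/CCITT, a common choice for small serial protocols */
static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= (uint16_t)(*data++) << 8;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Build a frame into 'out' (must hold at least len + 6 bytes);
 * returns the total frame length. */
size_t rs485_build_frame(uint8_t dst, uint8_t src,
                         const uint8_t *payload, uint8_t len,
                         uint8_t *out)
{
    size_t n = 0;
    out[n++] = FRAME_SOF;
    out[n++] = dst;
    out[n++] = src;
    out[n++] = len;
    for (uint8_t i = 0; i < len; i++)
        out[n++] = payload[i];

    /* CRC computed over everything after the start byte */
    uint16_t crc = crc16_ccitt(&out[1], n - 1);
    out[n++] = (uint8_t)(crc >> 8);
    out[n++] = (uint8_t)(crc & 0xFF);
    return n;
}
```

A CAN controller, by contrast, already implements framing, addressing and bitwise arbitration in hardware, so the application only has to fill a frame with an identifier and up to eight data bytes.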
Does this mean that RS485 is only suitable for master-slave protocols? Not necessarily. A number of vendors have implemented their own proprietary multimaster protocols based on RS485. The way they detect collisions and ensure data integrity is not public, but the solution itself is indeed possible. Some open multimaster protocols use control bytes for reserving/releasing the bus and even detect collisions when at least two address fields overlap. Another possible solution is to use some kind of synchronization between nodes; one way of detecting collisions by echo-checking is sketched below.

Other sources of information:
http://www.can-cia.org
http://en.wikipedia.org/wiki/Controller_Area_Network
http://en.wikipedia.org/wiki/RS-485
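Coming back to collision detection: one common approach on a shared half-duplex bus is to keep the receiver enabled while transmitting and compare every echoed byte with the byte just sent; a mismatch means another node was driving the line at the same time. The sketch below only illustrates that idea, assuming hypothetical uart_write_byte(), uart_read_byte() and delay_ms() helpers provided by the target platform; it does not describe any vendor's proprietary scheme:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical UART helpers supplied by the target platform */
extern void uart_write_byte(uint8_t b);
extern bool uart_read_byte(uint8_t *b, uint32_t timeout_ms);
extern void delay_ms(uint32_t ms);

/* Transmit a frame and abort as soon as the echoed byte differs from
 * the transmitted one, which indicates that another node was driving
 * the bus at the same time (a collision). */
bool rs485_send_with_collision_check(const uint8_t *frame, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t echo;
        uart_write_byte(frame[i]);

        /* The transceiver's receiver stays enabled, so we see our own bytes */
        if (!uart_read_byte(&echo, 5) || echo != frame[i])
            return false;   /* collision or bus error: let the caller retry */
    }
    return true;
}

/* Example retry policy: wait a pseudo-random time before trying again */
bool rs485_send_retry(const uint8_t *frame, size_t len, int max_tries)
{
    for (int attempt = 0; attempt < max_tries; attempt++) {
        if (rs485_send_with_collision_check(frame, len))
            return true;
        delay_ms(2 + (uint32_t)((attempt * 7) % 20));   /* crude back-off */
    }
    return false;
}
```

Combined with a random back-off before retrying, this gives a crude CSMA-like behaviour, although without the deterministic, bitwise arbitration that CAN provides.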
Distributed control systems at home
Distributed control is a concept that originated in the 70's to cover the needs of a growing industry in terms of automation. Before that, centralized automation systems, where a main computer did all the measurements and control, were the only valid model. The need to distribute the intelligence along an industrial plant and avoid complex cable layouts made some companies think about a way of providing a communication method for their controllers. That was the genesis of the field buses. The solution was then progressively adopted by the industry, and nobody questions nowadays the advantages of the distributed control model. Distributed control systems are typically found in the manufacturing industry and also in buildings, vehicles, planes and vessels.

Nevertheless, is the distributed model really suitable for home automation applications? At the beginning of the current century Microsoft tried to impose a centralized model based on a PC running Windows. Microsoft knew that no distributed control system could compete against a PC in terms of flexibility, power and price. Moreover, most home automation manufacturers already provided drivers and applications for controlling their systems from a Windows machine. Other companies such as HomeSeer Technologies or Home Automated Living have been providing very good tools for transforming a simple PC into a programmable home controller with lots of functionality at very competitive prices. In this panorama, the professional distributed systems found an important adversary in the home environment. Only Lonworks and EIB seemed to win some part of the market from the PC-based systems. Other distributed systems such as CBUS or InstallBus also had some success, but only in certain local markets. These distributed systems are commonly installed in large houses, where they provide important savings in cable runs. But small and medium houses are still dominated by centralized systems.

Home automation devices are rarely as powerful as an industrial controller. They usually emphasize low complexity of installation rather than other aspects that are vital in other environments (speed of communication, robustness, programming capabilities, etc.). Moreover, most pure home controllers (as opposed to building controllers) often delegate the most complex decisions to an external PC. As a result, implementing a PC-based home automation network in a medium-size house or apartment is often less expensive than using distributed systems; distributed systems are usually economically advantageous only in large installations, where the complex cable runs required by a centralized approach are no longer justified.

OK, so could we state that PC-based central systems are less expensive than distributed systems for the home? But what about other aspects? Is the PC-based model really the best for home automation applications? This is a very personal point of view, but I think that distributed systems are less error-prone, as every controller does a different job. In the PC-based system, one computer maintains a number of tasks and the complexity of the system sometimes grows exponentially with the number of endpoints to control. Hence the popular phrase: "when the computer halts, the whole installation becomes useless". Thus, must we live with that dilemma: low cost against robustness?
As a home automation enthusiast and developer of embedded controllers, I began thinking about a way of breaking that rule. First I thought that Linux and low-cost 32-bit platforms could be a good starting point. Then I discovered some good open-source resources that perfectly complemented the idea I had in mind. After more than two years of work, the opnode project is taking shape...
The OPNODE concept
TCP/IP vs field buses for control applications. The eternal discussion
Zigbee: another "unstandardizable" standard?
It seems that Zigbee is finally following in the steps of other "promising" standards. Started in 2003, Zigbee appeared as the solution for all our frustrations in terms of interoperable wireless control. The most important manufacturers were there, supporting the emerging standard and publishing imminent dates for the release of new products. Any Zigbee device from any manufacturer was going to provide total interoperability with any other Zigbee-compliant product. From light control systems to thermostats and even remote controls for multimedia equipment, this standard made us think that the total solution for controlling devices wirelessly was just arriving. The massive production of Zigbee-compliant devices was initially announced for 2004, then 2005... Now, three years after all the fanfare, Zigbee is still in the oven.

This article does not intend to question Zigbee itself as a technology. Indeed, Zigbee was started on a solid technological base. Built on a well-defined low-level standard, IEEE 802.15.4, and providing support for different frequency bands, it led a great number of IC manufacturers to quickly release Zigbee-compatible interfaces and OEM modules. The price of the new RF controllers backed up the promise of producing really low-cost devices, and the presentation of a couple of prototypes at some international exhibitions removed the doubts of some of the most critical technologists. Moreover, the wish of the Zigbee organization has always been to provide reliable, interoperable, low-cost devices.

So what happens with these international standards? Why is moving these things forward so hard? This is the actual subject of this article. Communication standards are often created as a way of sharing costs and resources among the promoters. Besides, a company wanting to develop a product under a certain standard will find a communication protocol already defined and even well-tested platforms from which to start developing. But the companies that create the standard always assume the extra work and cost of participating in the definition of the new technology. As a result, these companies usually try to impose their decisions, all of them based on their own commercial and technical interests. When a committee is formed by dozens of members, each one with its own market and prior technological base, the negotiation process becomes complicated, especially when the members are big companies that don't worry about delaying the release of the new technology indefinitely.

In contrast to these open standards, other "de facto" standards led by a single company producing a one-chip solution sometimes get better results. This is the case of Lonworks, a technology created and promoted by Echelon. But I don't mean that "democratic" open standards have a worse future than proprietary ones. Some open initiatives such as CANopen, DeviceNet, EIB, BACnet, etc. are examples of collaboration among companies and academic institutions. The secret of success is maybe in understanding that interoperability is something positive for the market.
Balanced designs - choosing the correct technology for a new product
"Balanced design" is a concept that I often use in my reports during the initial definition of architectures for embedded controllers. It's a term that I invented to describe the proportion between device functionality and platform power. Device functionality is the set of functional features provided by the device (ex: amount of control points, graphical interface, communication channels, processing speed, etc.). Platform power gives an idea about the amount of technological resources invested in the controller, most of them hardware capabilities but others related to the operational system (software) itself. Note that this term is only used by me into my projects, mainly embedded controllers and gateways but I guess that it could also be applied to any technological solution in the market. On the other hand, the concept only tries to serve as a parameter when comparing and choosing technical solutions. It should never be used as a way of evaluating the commercial aspects of a product. Unfortunately, technical excellence doesn't always translates into commercial success. Thus, when could a device be considered technologically balanced? Rating the "balance" of an electronic product depends on the following points:
In other words, before quantifying the balance of a product, the product itself must comply with its specifications and provide the functionality it is supposed to have. After that, the platform power is evaluated. The optimum balance of a device is reached when the chosen platform meets but doesn't exceed the needs in terms of computing. Thus, a product may not be well balanced at the beginning of its life but gain in functionality and then progressively improve its technical balance with the release of new versions. On the other hand, a device with an overkill hardware platform and an unnecessarily costly OS will always present a low technological balance. But what is the interest of providing a good technological balance? I summarize the most interesting points of following this philosophy: