Approaches to Home Automation Protocols and Implications for the Custom Install Channel
Charmed Quark principal Dean Roddey presents the elements of a home automation protocol, the opportunities and pitfalls for any given approach, and the implications for the professional integration channel.
Home automation protocols are sort of the plumbing of the automation world: Everyone who uses a smart home system is making use of one or more control protocols, but as with plumbing we rarely think about them until something starts to really stink.
Control protocols are how automation systems talk to the devices they control, at a minimum defining the format of messages sent back and forth between the automation system and the device. The quality of those protocols plays a substantial role in the quality of automation solutions and end user experiences, no matter how much work is done to keep them hidden away under the floorboards.
Recently this uber-geeky realm has become a battleground where some large players are attempting to put forward their own versions of the “one protocol to rule them all,” and hence to gain leverage within (or even dominate) the automation world. Apple’s HomeKit and Nest’s (aka Google’s) Thread initiatives are the most visible examples of this conflict. There are many fanboys of each who seem to think that the entire professional automation world is about to be made irrelevant by these efforts.
Leaving aside whose story is best and the politics thereof, in this article I wanted to just provide more information about control protocols in general, what types of problems they cause, how these problems limit the automation world, and so forth.
The biggest area of confusion that I see among the laity concerning device control is the issue of syntax vs. semantics. And this division is very important to the rest of the discussion, so I’d like to cover this in some detail.
In the most general sense, syntax is related to the actual structure of communications, and semantics is related to the meaning being conveyed. For instance, syntax is how these two messages differ from each other.
POWER ZONE1 ON
0x1 0x20 0xFF
These are two different ways that an A/V receiver might expect an automation system to send it a command to power on zone 1. They say exactly the same thing, only the syntax is different, not unlike saying the same thing in English vs. Latin.
In one, it’s a text-based system and in the other it’s a binary system where the “words” are just numbers. And the order of subject and verb can change, just as it does in English vs. Latin. The first is verb, subject, and new power state. The second is subject, verb, and new power state.
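To make the distinction concrete, here is a minimal sketch of the same semantic command ("power on zone 1") encoded in both of the syntaxes above. The byte values for the binary form are illustrative assumptions, not any real receiver's protocol.

```python
# Two hypothetical encodings of one semantic command: power on zone 1.
# Same meaning, different syntax.

def encode_text(zone: int, power_on: bool) -> str:
    """Text syntax: verb, subject, new power state."""
    return f"POWER ZONE{zone} {'ON' if power_on else 'OFF'}"

def encode_binary(zone: int, power_on: bool) -> bytes:
    """Binary syntax (invented): subject (zone), verb (0x20 = power), state."""
    return bytes([zone, 0x20, 0xFF if power_on else 0x00])

print(encode_text(1, True))             # POWER ZONE1 ON
print(encode_binary(1, True).hex(" "))  # 01 20 ff
```

An automation system's device driver hides exactly this difference: both functions express the same intent, and everything above the driver layer never sees which one was used.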
One of the benefits claimed for any sort of ubiquitous control protocol is that all devices would use the same syntax. And that is certainly true. There would in fact be a fair amount of benefit to be gained by all devices using the same syntax. But syntax has never been what has held the automation world back, despite what many people think.
Yes, it can be annoying sometimes when a device uses some very ad hoc or unusual syntax that is difficult to deal with. But, for the most part, this is just an annoyance, not a real limitation. All automation systems easily hide these syntactical differences under the hood. It would cut down on some amount of time-sucking grunt work if every device used the same syntax, but it would not in any way take us to the land of milk and smart homes.
Ultimately, semantics is the real bane of automation systems when it comes to providing high-quality, and hopefully smart, helpful, and self-aware, automation solutions.
Semantics works at the next level up from syntax and answers questions like: What is a thermostat? What is a scene? What are the possible states of a security zone and their significance? What is the range of a volume control? What makes up current weather conditions vs. weather forecast data?
An automation system can choose to be semantically agnostic and take no position on any semantic issues, essentially just exposing the bespoke details of every device. In such a system, integrators must define all of the semantics themselves in each installation they do.
A particular device may report states A, B, and C for a security zone. It’s up to the integrator to decide what that means, how to react to those states, how they might be exposed to the end user, and how to deal with the differences if you swap out the security device at some point in the future.
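A rough sketch of that per-installation burden, assuming hypothetical raw states "A", "B", "C" and invented semantic names: the integrator writes a mapping like this for each device, and rewrites it if the panel is swapped.

```python
# The "semantically agnostic" approach in miniature: the integrator must
# decide what each device's raw zone states mean. All names are invented.

SEMANTIC_ZONE_STATES = {"secure", "violated", "trouble"}

# Per-device mapping, maintained by the integrator for this installation.
acme_panel_map = {"A": "secure", "B": "violated", "C": "trouble"}

def zone_state(raw: str, device_map: dict) -> str:
    """Translate a device-specific raw state into a semantic one."""
    state = device_map.get(raw)
    if state not in SEMANTIC_ZONE_STATES:
        raise ValueError(f"Unmapped zone state: {raw!r}")
    return state

print(zone_state("B", acme_panel_map))  # violated
```

If the automation system instead defined the semantic states itself and required every security driver to report in those terms, this table, and the logic built on it, would not need to be redone per installation.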
There’s nothing necessarily wrong with such an approach per se, and it provides maximum flexibility. However, it also means that every solution is pretty much a complete-from-scratch undertaking and any change in hardware might require extensive adjustments.
If the goal is to create smarter homes—and make the configuration of systems much more efficient (hence less labor-intensive for the integrator and less costly for the customer)—then it behooves an automation system to provide a broad range of very strong semantic definitions and force all devices under its control to fit into those semantic definitions. Once that is done many benefits accrue:
- You can swap out device A for device B and nothing has to change, or at worst the impact will be vastly reduced.
- You can create generic logic that can be reused over many installations.
- You can create generic graphical interfaces that can be more easily reused.
- Third parties can create highly refined or complex logical or graphical components that you yourself might never have the time to create, and which can actually be adopted into your own solutions with minimal effort.
- The system itself can be much more self-aware because it understands the meaning of the states of devices.
- The system can more easily self-configure, because it understands types of devices and how to integrate the functionality they provide, at least for the core device types.
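One common way to realize these benefits is for the automation system to define a semantic contract per device type and have each driver implement it. The sketch below assumes invented class and method names; no real product's API is implied.

```python
# Sketch of a semantic device contract: the system defines what a
# thermostat *is*; each driver maps its hardware onto that contract.

from abc import ABC, abstractmethod

class Thermostat(ABC):
    """Semantic contract every thermostat driver must fulfill identically."""

    @abstractmethod
    def current_temp(self) -> float: ...

    @abstractmethod
    def set_setpoint(self, temp: float) -> None: ...

class BrandXDriver(Thermostat):
    """Hides Brand X's bespoke protocol behind the semantic contract."""

    def __init__(self) -> None:
        self._setpoint = 20.0

    def current_temp(self) -> float:
        return 21.5  # in reality, query the device over its own protocol

    def set_setpoint(self, temp: float) -> None:
        self._setpoint = temp  # in reality, send the device's own command

# Reusable logic written only against the semantic type: swapping
# BrandXDriver for any other compliant driver changes nothing here.
def too_warm(t: Thermostat, limit: float) -> bool:
    return t.current_temp() > limit
```

Because `too_warm` depends only on the contract, it is exactly the kind of generic, reusable logic the list above describes.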
The benefits are clearly quite substantial. Ultimately anything that allows the custom installer to more quickly create components that are both value added (not off-the-shelf and available to everyone) but also reusable across multiple installations and hardware, is a very good thing.
And here I am talking about not just standalone screens of the “virtual remote control” variety, but tightly integrated, activity-oriented reusable components that coordinate multiple devices.
When Semantics Don’t Mesh
Here’s the rub: As soon as any automation vendor (or third-party protocol creator) attempts to come up with a nice set of semantic definitions for thermostats, security systems, A/V receivers, sensors, irrigation systems, etc., they quickly discover that the various extant examples of any given type of device are so varied, and so inconsistently implemented, that they are forced into one of three options.
One of those options – quite popular with those who think their solutions will dominate the smart home—is to take a hard stand and say: If you want to be part of our ecosystem, you have to abide by some minimum set of semantic definitions. Any devices that cannot meet these minimum requirements cannot be supported, or at least not within this semantic framework (i.e., they can dial into the ecosystem, but they’re on their own).
This sounds like a sane approach, but a lot of devices may fall below the line, and none of those devices gets the auto-magical treatment. The customers who have those devices won’t be happy, and the vendor’s claims for breadth of hardware support are artificially limited.
Another option is to set the bar very low – for example, defining “on” and “off” for light switches, but not “dim.” That can be done, and it lets a lot of devices into the party that might not otherwise have been invited. But ultimately you will have done a lot of work for limited benefit. Every drop in the bar effectively means that fewer functions can be depended on when creating reusable logic or UI components. So a large price is paid in the ultimate effectiveness of your efforts.
A third option (which has been used in some previous efforts, such as UPnP) is to go with a very flexible, dynamic definition, i.e., provide semantic definitions, but make parts of them optional, or allow them to take optional forms. What features are supported, and the form in which they are exposed, must be determined dynamically and adapted to. So, for instance, if volume control is supported it may be in terms of some arbitrary decibel range, or it might be a standardized percentage value.
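The volume example alone shows the adaptation burden this model pushes onto every consumer. A minimal sketch, with invented unit names and an assumed default dB range:

```python
# Sketch of the normalization every consumer must do under a dynamic
# semantic model: volume may arrive as a percentage or as some
# device-specific dB range. Unit names and ranges are invented.

def volume_to_percent(value: float, unit: str,
                      db_min: float = -80.0, db_max: float = 0.0) -> float:
    """Normalize a reported volume to a 0-100 scale, whatever its form."""
    if unit == "percent":
        return max(0.0, min(100.0, value))
    if unit == "db":
        return (value - db_min) / (db_max - db_min) * 100.0
    raise ValueError(f"Unknown volume unit: {unit!r}")

print(volume_to_percent(-40.0, "db"))      # 50.0
print(volume_to_percent(75.0, "percent"))  # 75.0
```

And this is one field of one device type; multiply it across every optional feature and form, and the combinatorial testing problem described below follows directly.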
A dynamic system might sound like the best of both worlds, but ultimately it’s probably the worst. It has the appearance of a standard, but it’s really a non-standard. If the ultimate point of semantic definitions is to allow for the creation of reusable components, then it at least partially fails, because in reality no one is going to get all of the ifs, ands and buts correct; they won’t have the gear or time to test against all of the combinatorial possibilities.
It also requires a very dynamic approach. This is perhaps fine for a standard application, which can, via extensive logic, adapt itself on the fly to what is available, although usually at the cost of a highly customized layout. For integrators wishing to create reusable content, within the context of the tools provided by most automation systems, that kind of dynamic approach could be quite tedious if not impractical.