Reflections on the Smart Systems Developer Manifesto
A few days ago I read the excellent article "Smart Systems Developer Manifesto: 15 Principles".
I decided to share my thoughts on the layer below it: the basic architectural principles that would support the principles it proposes.
By its nature, this post will be even more subjective than the Manifesto.
First, let's deal with a few terms, starting with the developer and the user of smart systems. Who are they, and where is the line between them?
There are two obvious extremes: the manufacturer of the smart switch I bought, and my wife, who turns on the light. But where do you place me, or my son, who occasionally grabs an ESP32 to solder on a couple of sensors, buttons and LED strips, which then also interact with that switch?
Even if we turn our attention to the vendor, things are not entirely clear either. Let me explain. The obvious extreme: all devices from one manufacturer, the hub is his, the smartphone application is his, so he is the developer of the smart system. But what if several vendors share the same network? What if the central hub is from one vendor, the devices from another, and the cloud through which I interact via, say, Siri, is run by Apple? Which of them is the Manifesto addressed to; which of them is "the developer"? I think that, stepping down a level from the Manifesto's global abstractions (which, by the way, I almost completely share), we should introduce a deeper functional separation. Otherwise, collective responsibility for the user experience turns into either collective irresponsibility, which we all now observe to one degree or another, or into rigidly walled ecosystems with integration left to the user: which of you has more than one smart home application on your phone?
I propose the following separation of objects: end devices, hubs, gateways, data clouds, integration services, and user interfaces.
The subjects (as roles), for example users, tuners, developers, manufacturers or suppliers, exist in the context of these objects.
Together they create and represent a system that provides its functions through data exchange.
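This separation can be sketched as a small type model. The names below are my own illustration of the classification, not any standard:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class ObjectKind(Enum):
    """The six object classes proposed above."""
    END_DEVICE = auto()
    HUB = auto()
    GATEWAY = auto()
    DATA_CLOUD = auto()
    INTEGRATION_SERVICE = auto()
    USER_INTERFACE = auto()

class Role(Enum):
    """Subjects exist only in the context of objects."""
    USER = auto()
    TUNER = auto()
    DEVELOPER = auto()
    MANUFACTURER = auto()
    SUPPLIER = auto()

@dataclass
class SystemObject:
    name: str
    kind: ObjectKind
    # roles attached to this particular object, e.g. who uses or configures it
    subjects: dict[Role, str] = field(default_factory=dict)

# the same person can hold different roles on different objects
dimmer = SystemObject("DIN-rail dimmer", ObjectKind.END_DEVICE,
                      {Role.USER: "wife", Role.TUNER: "me"})
```

The point of the sketch is only that roles are bound to objects, not to the system as a whole.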
In this post I will dwell in a little more detail only on the objects.
End devices

And here lies a rather amusing question: what is the end device, that is, the very "thing" of the Internet of Things?
With a smart vacuum cleaner everything is clear: you take it and its base out of the box, plug it into an outlet, it whirs off angrily, produces a superposition of your white carpets with the free-form creativity of your pets, and rolls back to recharge. With a light bulb, oddly enough, things are not so simple.
Right now I am looking at my chandelier with several lamps, which I turn on with three different switches (separately, not like arming nuclear warheads), while the actual work is done by a dimmer on a DIN rail in the electrical panel. Here the "thing" would seem to be the final actuator, but the funny part is that I don't even remember which of the other chandeliers this multi-channel dimmer also controls. So it is a device, but not the whole of it, only a part. And yet my wife's phrase "turn on the light" remains intuitive.
I invite readers to find the "thing" in another bundle: a room controller (thermostat) that drives the radiator heads (heating) through one channel of an 8-channel PWM actuator, and cooling through one channel of a 4-channel 0-10V actuator, which in turn sets the setpoint for the constant-air-flow dampers in the duct.
I had the idea of getting pedantic and introducing a precise definition of a "thing" in this context, but instead let's talk about end devices in terms of their functions, and leave how many of them there are and how they interact out of the brackets for as long as possible.
Then the intuitive "make it warmer" or "turn on economy mode when we leave" becomes fairly obvious.
Hubs

Given the previous thoughts about what things are, hubs are in fact factories of even more virtual things. A hub can create a thermostat once temperature sensors and radiator heads already exist. It can create an "everything" device that can be switched off. Funnily enough, a hub can be entirely virtual: in EIB/KNX, for example, the basic concept is the group address. A sensor sends data to it, and an actuator takes one or more group addresses for each of its functions. So if at the exit of your apartment you have a switch that sends 0 to some group address 1/1/1, and that address is configured in every actuator responsible for lighting (alongside more individual ones, say 1/0/1, 1/0/12 and so on), you get a "turn off all the lights" device without any additional physical hardware.
In this example the hub is the network itself; in other cases the hub often exists as a separate physical device. Another good example of a "not very physical" hub is Node-RED.
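The KNX "all lights off" trick can be sketched as a tiny pub/sub model: each actuator function subscribes to several group addresses, so a single telegram to 1/1/1 acts as a virtual "turn everything off" device. All names and addresses here are illustrative:

```python
from collections import defaultdict

class Bus:
    """A toy KNX-like bus: group address -> list of listening callbacks."""
    def __init__(self):
        self.listeners = defaultdict(list)

    def subscribe(self, group_addr, callback):
        self.listeners[group_addr].append(callback)

    def send(self, group_addr, value):
        for cb in self.listeners[group_addr]:
            cb(value)

class LightActuator:
    def __init__(self, bus, individual_addr, group_addrs):
        self.on = False
        # each function listens on its individual address AND shared ones
        for ga in [individual_addr, *group_addrs]:
            bus.subscribe(ga, self.set_state)

    def set_state(self, value):
        self.on = bool(value)

bus = Bus()
hall = LightActuator(bus, "1/0/1", ["1/1/1"])
kitchen = LightActuator(bus, "1/0/12", ["1/1/1"])

bus.send("1/0/1", 1)   # only the hall light turns on
bus.send("1/1/1", 0)   # the exit switch: everything off, no extra hardware
```

Here the "hub" is nothing but the addressing convention itself, which is exactly the point of the KNX example.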
Note that I deliberately do not say how data streams from already existing devices get into this hub and how they flow out to other parts of the system. Other objects of the system are responsible for that function: gateways.
Gateways

It just so happens that over the last 40-50 years a great many networks were created, with different topologies and abstraction levels (and their own protocols), that can one way or another become part of the Internet of Things. To integrate two networks, some traffic exchange node is created. If such a node wraps all the data of one protocol into another, it is customary to call it a tunnel, since "from the other side" you can receive the entire stream and work with it as a local one; if it exchanges abstractions, such a node is called a gateway.
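The tunnel/gateway distinction can be shown in a few lines. The register map and scaling below are made-up examples, not any real Modbus device profile:

```python
class Tunnel:
    """Wraps raw frames of protocol A into protocol B payloads unchanged:
    the far side receives the full original stream, as if it were local."""
    def __init__(self, transport):
        self.transport = transport

    def forward(self, raw_frame: bytes):
        self.transport.send(b"WRAP:" + raw_frame)

class Gateway:
    """Translates between abstractions: a raw register read becomes
    a named value in the target network's model."""
    def __init__(self, register_map):
        self.register_map = register_map  # register -> (name, scale)

    def translate(self, register: int, raw_value: int):
        name, scale = self.register_map[register]
        return {name: raw_value * scale}

# a Modbus meter exposing temperature in tenths of a degree
gw = Gateway({0x0001: ("living_room_temp_c", 0.1)})
reading = gw.translate(0x0001, 215)  # a named, scaled value, protocol gone
```

The tunnel preserves the protocol; the gateway deliberately loses it in favour of the target abstraction, and everything below hinges on what that target abstraction is.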
Since this post is fairly free with terminology, let's use the term gateway and spend a little more time analyzing what devices actually get gatewayed from and to. Everyone who has worked with industrial control systems sooner or later extended the "central" networks with additional ones; for example, a lot of Modbus meters got connected to Profibus.
That is all a well-worn case not worth dwelling on, but what exactly does the Xiaomi Mijia Multifunctional Gateway bridge? I would like to say "from ZigBee to Wi-Fi", but that is not quite true. Yes, on one side of the gateway there is a ZigBee network, but even if you connect a third-party device, you cannot reach it through this gateway. On the other side the gateway has Wi-Fi, but it communicates not only over the local network using its own protocol (which hackers call the miIO protocol), but also with the Xiaomi cloud, which serves the Mi Home application when you leave the local network. Samsung SmartThings is another very interesting gateway, but there are difficulties here too.
If the earlier question was "why Groovy, with all its diversity?", now I would call the drift toward the cloud the main difficulty.
Let me explain my position, in the hope that I am mistaken. When creating a new device compatible with the Samsung SmartThings ecosystem, you can choose between two options: connect to the hub, or directly to the cloud. And if you want to create an automation, the thing we above called a device generated by the hub, it has moved to the cloud. Period. That is a violation of the Manifesto. If your Internet connection is down, you cannot turn on the light in the toilet either through the application (apparently, local execution is no longer expected, judging by the diagram in the article on IoT and Tizen IoT), or locally from a motion sensor of another network. After all, you need to integrate the device, not the network, so everything goes through the cloud.
Those who have managed to work with Tizen IoT, please correct me.
The situation is similar with Logitech Harmony, which broke local automations after an update.
If we set aside "automation network to automation network" gateways, the most important thing in a gateway's operation is the target abstraction into which it translates the representation of devices from its core network, and that abstraction is already defined by the data clouds.
Clouds of data
This is the most obvious and the least obvious object of the system at the same time. Its functions and the need for it are obvious, but how exactly this component is implemented is least obvious and does not depend on the wishes of the end user.
So I will fill this part mostly with my personal wishes and reflections.
It is good when there is a clear and simple API; even better when that API is open and you can just connect to it.
Here I must make a reservation about simplicity. Simplicity, unless we are talking about mathematics, is subjective in principle. My personal view is based on my experience and my wishes; of course, for a company about to ship a product in the hundreds of thousands of units, simplicity is something quite different.
I want to be able to weave the results of my hobby into the world around me. What does that require?
The main limiting resource is time, the second is knowledge, the third is money. If I cannot make a device that somehow works through the cloud in one day, it will be done "later", where "later" is a synonym of "maybe tomorrow" and "mañana". But if it somehow works, then after a couple of weekends it will probably work as it should. IoT is interdisciplinary by nature, so you may need to stand up some kind of OAuth2 server, add certificates to it, and implement an API, all of this just so my little home-made microcontroller can work with a voice assistant, which I will write about in the next part.
The previous thoughts were mostly about "how?", but a no less important problem (and this is a problem, not a challenge) lies in "what?".
I would divide all major data clouds usable for IoT into two classes: data points and capabilities.
Clouds of data points

This is either a parallel evolution with the world of SCADA, or its direct descendant.
You have temperature sensor readings? Fine, let's write them somewhere in the cloud where whoever needs them can read them. The temperature sensor's battery level? Same thing, see above. There are whole classes of clouds that allow you to do this, and they are easy to recognize by the MQTT protocol (though it may also be CoAP or STOMP). A wonderful thing: I use it myself, and not only in IoT, while calling it "the triumph of freedom over common sense". The protocols are so flexible that everyone solves the same problem in their own way.
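A minimal sketch of the data-point model, assuming nothing beyond the pattern itself: writers publish to topics, readers fetch the last retained value. A real deployment would use an MQTT broker and a client library such as paho-mqtt; this in-memory version just shows the shape, with made-up topic names:

```python
class DataPointCloud:
    """A toy 'cloud of data points': publishers write to topics, readers
    get the last value. Mimics MQTT retained messages, without a broker."""
    def __init__(self):
        self.retained = {}
        self.subscribers = {}

    def publish(self, topic, payload):
        self.retained[topic] = payload  # last value wins, like a retained message
        for prefix, cb in self.subscribers.items():
            if topic.startswith(prefix):
                cb(topic, payload)

    def subscribe(self, prefix, callback):
        self.subscribers[prefix] = callback

cloud = DataPointCloud()
seen = []
cloud.subscribe("home/bedroom/", lambda t, p: seen.append((t, p)))
# the sensor neither knows nor cares who reads it, or what it "means"
cloud.publish("home/bedroom/temperature", "21.4")
cloud.publish("home/bedroom/battery", "87")
```

Note what is missing: nothing here says that "temperature" is a temperature. That flexibility is exactly the "triumph of freedom over common sense" mentioned above.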
Clouds of capabilities

I admit, 8 or 9 years ago, when I was writing the next version of my home platform, I wanted to classify the objects in the system, so that the system would understand that this is a light bulb, this is a switch, and this is a valve. The obvious first iteration: a list of types (in fact classes, because OOP is right there). Then a switch appears, but battery-powered, and behold the power of OOP: we inherit and get a new class. And then it turns out that anything can be battery-powered. So I had to cut the system not into a tree of device classes but into device "capabilities", by combining which we get a switch.
This is how Apple HomeKit, Alexa, Google and other smart home cloud ecosystems work. It would seem to be happiness.
But, as I said, I used this approach 8-9 years ago, and I defined those capabilities myself. Wanted to add an IP camera or an Asterisk PBX? Great: done, and it works.
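The shift from a class tree to composed capabilities can be sketched as follows. The capability names are illustrative, not those of any platform:

```python
class Capability:
    name = "base"

class OnOff(Capability):
    name = "OnOff"
    def __init__(self):
        self.on = False
    def toggle(self):
        self.on = not self.on

class BatteryPowered(Capability):
    name = "Battery"
    def __init__(self, percent=100):
        self.percent = percent

class Device:
    """A device is not a node in a class tree but a bag of capabilities."""
    def __init__(self, name, *caps):
        self.name = name
        self.caps = {type(c).name: c for c in caps}
    def has(self, cap_name):
        return cap_name in self.caps

# a battery-powered switch is composition, not yet another subclass
switch = Device("wall switch", OnOff(), BatteryPowered(95))
camera = Device("ip camera", OnOff())  # new device kinds appear for free
```

With inheritance, "battery-powered anything" forces an explosion of subclasses; with composition, Battery simply attaches to whatever needs it.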
But here is the misfortune: the cloud ecosystems listed above are in fact not IoT ecosystems but ecosystems of voice assistants, whose capabilities must be universal for the whole world and all devices. Adding new capabilities is a process that differs significantly from my "added it and it works". We all remember how, at the dawn of these ecosystems, one had to "turn on the gate". The situation with SmartThings is similar.
No manufacturer can provide a clear, exhaustive list of capabilities for every device that might be released and become part of the IoT ecosystem. That is why such systems have no capabilities like visceral fat percentage, blood sugar or, who knows, acetone. Why can't the house cheer you up with joyful illumination when everything is fine, or draw your attention when something has gone wrong?
On the other hand, how do you create user interfaces without understanding what a device can do? The SCADA data-point approach was convenient for hiding topological and protocol features: you get data (a flat list) with some additional properties, for example reliability (whether there was a disconnection along the way) or access levels, but the basic representation was in the form of mimic diagrams.
IoT users live in the post-PC era: mimic diagrams are inconvenient on phone screens, and configuring them takes inexcusably long.
I think there should be a certain hybrid. First, the system must know what each device is, and it must have data points. No less important, there should be additional meta information to be delivered to a specialized cloud, for example your favorite voice assistant. Such information may include, in particular, the device's profile (set of capabilities) as understood by the external cloud, and its identifiers.
Describing the product, including its desired representations both for visual interfaces and for other kinds (voice interaction, AR/VR and the like), will be the manufacturer's job. But even if the manufacturer did not do this, the user, reluctantly and overcoming laziness, can still declare that this device shall henceforth be known to Google as "Smart Home Kettle" and has the following capabilities: TemperatureControl, OnOff, Modes and Toggles. Yes, I dropped the action.devices.traits prefix for compactness.
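Spelled out, the kettle's hybrid description might look roughly like this. The structure loosely follows Google's device/trait vocabulary, but the identifiers, topic names and the "data_points" section are my own illustration of the hybrid idea, not any vendor's schema:

```python
# illustrative metadata a hub could hand to a voice-assistant cloud
kettle = {
    "id": "kitchen-kettle-1",
    "type": "action.devices.types.KETTLE",
    # the capability profile, as understood by the external cloud
    "traits": [
        "action.devices.traits.OnOff",
        "action.devices.traits.TemperatureControl",
        "action.devices.traits.Modes",
        "action.devices.traits.Toggles",
    ],
    "name": {"name": "Smart Home Kettle"},
    # the hybrid part: raw data points live alongside the profile,
    # so local logic is not hostage to the assistant's vocabulary
    "data_points": {
        "water_temp_c": "home/kitchen/kettle/temperature",
        "power": "home/kitchen/kettle/power",
    },
}
```

The assistant consumes the traits; everything that does not fit its universal vocabulary stays reachable as plain data points.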
Integration services

Here, integration services should provide the interaction.
An integration service is the same gateway, but between clouds: some abstractions must be replaced by others.
The basis of the interaction is the event. There are both query models and publication models. The topic is so vast that describing it risks the Bollywood trap: as you know, more films are produced there per year than a person can physically watch, so by the end of the article I would be even further from reality than at the beginning of the description.
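Still, the core of an integration service fits in a few lines: translate one cloud's event vocabulary into another's, serving both the publication (push) and query (pull) models mentioned above. The event names and mapping below are invented for illustration:

```python
class IntegrationService:
    """A gateway between clouds: replaces one cloud's abstractions
    with another's."""
    def __init__(self, mapping):
        self.mapping = mapping  # source event name -> target event name
        self.last = {}          # cache that serves the query model
        self.targets = []

    def attach(self, target_callback):
        self.targets.append(target_callback)

    def on_event(self, name, payload):
        """Publication model: push translated events to attached clouds."""
        translated = self.mapping.get(name)
        if translated is None:
            return  # unmapped events are simply dropped
        self.last[translated] = payload
        for cb in self.targets:
            cb(translated, payload)

    def query(self, translated_name):
        """Query model: let the other side pull the last known state."""
        return self.last.get(translated_name)

svc = IntegrationService({"zigbee/0x42/occupancy": "home/hall/motion"})
events = []
svc.attach(lambda n, p: events.append((n, p)))
svc.on_event("zigbee/0x42/occupancy", True)
```

Real services add authentication, retries and schema validation around this, but the translation-plus-cache core stays the same.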
I have already mentioned many consumer systems; one could also recall LoRaWAN (and TheThingsNetwork), IFTTT, OpenStreetMap, AWS IoT, Azure IoT, weather services, and, in fact, almost any Internet or Intranet (is that term still used?) service, if these devices need to get into corporate systems.
I will not describe this part in depth, but I do want to complain that, unfairly in my opinion, home IoT systems have bypassed the desktop. Only in Mojave did HomeKit finally reach the Mac, which bewilders me. How is a kettle more complicated than the spreadsheets in which I can calculate the cash flow of some enterprise? I can work with Numbers in the browser, but I cannot turn off a forgotten iron, as Apple sees it. In short, give us PWAs!
User interfaces

User interfaces are, first of all, convenient physical switches, but there are unforgivably few of them.
AR, VR, mixed reality, voice assistants, smart TVs, applications for smartphones and tablets, EEG neural interfaces: these are the cherries on the cake that we geeks love to play with.
And what, you may ask, do Docker and microservices have to do with all this? If readers are interested, I will be happy to share my thoughts and practices on implementing an IoT system architecture based on this classification of objects.
I use it myself.