In the previous section, we learned how to normalize IoT data by using simple message wrapping. An event-driven architecture requires one simple principle: the output of every functional unit must be an action or event. Events can trigger new commands. However, this causality must never be broken. A command (which triggers an event) cannot create another command, just as an event cannot spawn another event.
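This causality rule can be sketched in a few lines. The handler and reaction functions below are hypothetical, and the `UpdateReadings` command name is just an illustration; the point is that each layer's type signature permits only the next link in the chain (commands yield events, events yield commands), so a command can never directly spawn another command.

```python
from dataclasses import dataclass

# Hypothetical message types: a command requests a change, an event records one.
@dataclass
class Command:
    name: str

@dataclass
class Event:
    name: str

def handle_command(cmd: Command) -> list[Event]:
    # A command handler may only emit events -- never further commands.
    return [Event(name=cmd.name + "Applied")]

def react_to_event(evt: Event) -> list[Command]:
    # An event reaction may only emit commands -- never further events.
    return [Command(name="Audit" + evt.name)]

# The chain alternates strictly: command -> event -> command -> ...
events = handle_command(Command(name="UpdateReadings"))
commands = react_to_event(events[0])
```

Because the broken-causality cases (command to command, event to event) simply have no function to call, the type signatures enforce the principle structurally rather than by convention.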
For event-driven architectures, this means the creation of a command. In most instances, device readings will result in an UpdateReadings or UpdateHealth command. Of course, if a device or gateway natively speaks the command API, the decoding and translating steps can be skipped. We could end our journey here and incorporate a Complex Event Processor (CEP) BRMS (like Drools) to handle our logic, but there are several key challenges that most CEP services cannot handle given the massive streaming input we'd expect to see from a robust, city-wide IoT network.
First, current CEP offerings are monolithic. They’ve yet to embrace a distributed deployment and synchronization design that is best suited for auto-scaling architectures (especially serverless).
Second, to support the expected throughput, most CEPs would need massive memory footprints that are not practical and introduce significant state-maintenance risk.
Third, CEP rules require a predefined understanding of the input structure and won't easily allow meaning to be derived from previously unknown devices. Thus, without a CEP, we need one further step to help with analytics in an event-driven architecture: aggregation.
A view of an object in an event-driven architecture at a moment in time is defined by the sequence of events before it. We're creating an aggregate view of an object based on the events we've seen. If humans lived in an event-driven architecture, we could, in theory, 'rebuild' a person by replaying all the events on his or her Facebook timeline. Typically, aggregation is used to maintain traditional objects within an event-driven architecture. For example, we could keep track of health message events to build a view of a specific deployed device on our IoT network. However, we've found that aggregating correlated events provides a powerful and generic way to view data entering a platform. In other words, we're making a larger event by aggregating smaller events together. These larger events allow us to find meaning and drive action.
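Rebuilding a device view by replaying its events can be sketched as a simple fold. The event types and field names here (`HealthReported`, `ReadingRecorded`) are illustrative assumptions, not a defined schema; the technique is the standard event-sourcing pattern the paragraph describes.

```python
# A minimal event-sourcing sketch: an aggregate view of a device is rebuilt
# by folding every event seen so far. Event types and fields are assumptions.
def apply(state: dict, event: dict) -> dict:
    state = dict(state)  # work on a copy; events never mutate history
    if event["type"] == "HealthReported":
        state["health"] = event["status"]
    elif event["type"] == "ReadingRecorded":
        state.setdefault("readings", []).append(event["value"])
    return state

def rebuild(events: list[dict]) -> dict:
    """Replay the full event sequence to produce the current view."""
    state = {}
    for e in events:
        state = apply(state, e)
    return state

history = [
    {"type": "HealthReported", "status": "ok"},
    {"type": "ReadingRecorded", "value": 21.5},
    {"type": "HealthReported", "status": "degraded"},
]
device_view = rebuild(history)  # latest health wins; readings accumulate
```

Because the view is derived purely from the event sequence, replaying the same history always yields the same aggregate, which is what makes the 'rebuild the person from the timeline' thought experiment work.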
Events of Events
Applying temporal reasoning within a computer system is best done through the context of intervals. We gain insights and meaning by grouping smaller events into a larger event interval. For example, if a particular larger event is a lecture, the lecture itself can contain many smaller events. By creating intervals, we can store knowledge about generic events that can help drive analytics or system behavior. We can build a simple tagging system to allow us to manage business rules on a per-entity basis. However, before we can start acting on intervals, we must find a consistent way to create and expand them. Event-driven architectures accommodate aggregation; a system must simply select a granularity that matches its domain. For example, a Smart City domain might focus on a standard site stratification (regions -> sites -> sectors -> units). The goal is to start aggregating events based on these levels of granularity.
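A minimal interval might look like the sketch below: a larger event that contains smaller events and carries tags for per-entity business rules. The class shape and field names are assumptions for illustration; the granularity values follow the region -> site -> sector -> unit stratification above.

```python
from dataclasses import dataclass, field

@dataclass
class Interval:
    """A larger event that groups smaller events for one entity."""
    granularity: str               # "region", "site", "sector", or "unit"
    entity_id: str
    tags: set = field(default_factory=set)    # hooks for business rules
    events: list = field(default_factory=list)

    def add(self, event: dict) -> None:
        # Expand the interval by absorbing another small event.
        self.events.append(event)

# The lecture example: one unit-level interval containing many small events.
lecture = Interval(granularity="unit", entity_id="room-101", tags={"lecture"})
lecture.add({"type": "DoorOpened"})
lecture.add({"type": "OccupancyDetected"})
```

Tags let rules attach to the interval ("apply lecture occupancy rules") without the rules needing to understand each small event inside it.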
For example, we might group events at the site level, then aggregate incoming device events that share a common site. The domain should define the interval structure: for a site, this may be shifts or workdays; for a unit, this may be individual occupancies or worksets. We see the flow of IoT events as aggregated intervals. Since incoming messages are all commands, we can ingest data from any external service, not just devices. A common example would be reservation information provided by an external payment app. To gain knowledge about normalized IoT data, an event-driven architecture must simply pick domain granularities and generate intervals (grouping events) based on that design.
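Site-level grouping can be sketched as bucketing events by a key derived from the domain's interval structure. Here the key is an assumed (site, workday) pair, with the workday cut taken naively from the timestamp's date portion; a real domain would define its own shift or workday boundaries.

```python
from collections import defaultdict

def interval_key(event: dict) -> tuple:
    # Assumed interval structure: one bucket per site per workday.
    # The date prefix of an ISO-8601 timestamp stands in for the workday.
    return (event["site"], event["timestamp"][:10])

def aggregate_intervals(events: list[dict]) -> dict:
    """Group incoming device events into interval buckets."""
    intervals = defaultdict(list)
    for e in events:
        intervals[interval_key(e)].append(e)
    return dict(intervals)

events = [
    {"site": "site-A", "timestamp": "2023-05-01T08:00:00Z", "value": 1},
    {"site": "site-A", "timestamp": "2023-05-01T17:00:00Z", "value": 2},
    {"site": "site-B", "timestamp": "2023-05-01T09:00:00Z", "value": 3},
]
intervals = aggregate_intervals(events)
```

Swapping in a different `interval_key` (per unit, per shift, per reservation) changes the granularity without touching the aggregation machinery, which is why picking domain granularities is the only decision the architecture forces on you.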
Normalization allows us to find meaningful relationships between seemingly disparate data. For an event-driven architecture, we can use these relationships to easily create intervals of collected events. These intervals give business logic and analytics access to powerful temporal reasoning. This reasoning is made far simpler by creating edges to the aggregates associated with the events contained in these intervals, essentially creating a graph between grouped events and related entities.
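The edge-creation step might be sketched as follows: for each event in an interval, emit an edge from the interval to every entity aggregate the event references. The interval id and the entity field names (`deviceId`, `site`, `unit`) are illustrative assumptions.

```python
# Sketch: link an interval to the entity aggregates its events reference,
# forming the edges of a simple graph. Field names are assumptions.
def build_edges(interval_id: str, events: list[dict]) -> set:
    edges = set()
    for e in events:
        for entity in ("deviceId", "site", "unit"):
            if entity in e:
                # One edge per (interval, entity) pair; duplicates collapse.
                edges.add((interval_id, f"{entity}:{e[entity]}"))
    return edges

edges = build_edges("shift-42", [
    {"deviceId": "d-17", "site": "site-A"},
    {"deviceId": "d-18", "site": "site-A"},
])
```

Temporal queries then reduce to graph traversals: starting from an entity, follow its edges to the intervals that mention it, rather than rescanning the raw event stream.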