Paul Winstanley of the UK Defence Solutions Centre considers whether the concept of sensing is changing with the times.
Nearly 30 years ago, I started my career as a Research Scientist developing laser-based systems. As a user, I quickly became familiar with the concept of sensing, particularly electro-optic (EO) sensing. Here, the objective was to detect, recognise or identify a threat. Thereafter the threat was tracked and an engagement was launched.
The important factors were: (1) the sensor was a physical device that we selected based on the system requirements, and (2) we applied a level of processing to transform a sensor output (data) into actionable information.
Wind the clock forward 30 years and we are surrounded by much more data. This is derived not just from sensors, but from sources such as social media and, increasingly, from Internet of Things-enabled devices.
This digital data is valuable in its own right, and I suggest that greater value still can be obtained when digital and sensor data are fused.
As an example, consider the 2011 Great East Japan Tsunami. There are papers that have analysed the social media feed (Twitter) surrounding this tragic event[1]. The referenced paper presents data indicating that Twitter could be a viable tsunami sensing and warning system. This solution has many attributes: notably, it is highly dispersed (thereby allowing a degree of resilience); it is low cost; and, as intimated previously, it can also disseminate a warning to those in potential danger.
Of course, one of the disadvantages of this system is that malicious information, at the wrong time (or is this the right time?), can wreak havoc. The detection of deception in social media is an active academic research topic[2]. However, I would suggest that this is not a new problem; if I go back 30 years, our physical sensing solutions had to be robust to the target deploying camouflage, concealment and deception (CCD) techniques – same problem, different implementation.
The significant benefit lies in combining digital data with real-time sensing, with digital data providing context and cueing to optimise the use of expensive physical sensing assets. Of course, this will require data fusion that has the ability to deal with:
- Data velocity and the ability to deal with batch and streaming data
- Data variety and the ability to deal with structured and unstructured data
- Data volume and the ability to deal with terabytes to zettabytes
- Visualisation and the ability to share actionable information. This isn’t a needle-in-a-haystack problem (magnets still work pretty well for that). This is a needle in piles of needles.
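To make the cueing idea concrete, here is a minimal sketch in Python of fusing structured sensor readings with an unstructured social-media stream. All names, regions and thresholds are illustrative assumptions of mine, not a description of any fielded system: a keyword match in the text stream simply boosts the confidence of detections from the same region.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor_id: str
    region: str
    confidence: float  # detection confidence in [0, 1]

def fuse(readings, posts, keyword="tsunami", boost=0.25):
    """Boost sensor detection confidence in regions that also show
    keyword activity in the social-media stream (unstructured text).
    Keyword, boost value and record shapes are illustrative only."""
    active = {region for text, region in posts if keyword in text.lower()}
    fused = []
    for r in readings:
        conf = min(1.0, r.confidence + boost) if r.region in active else r.confidence
        fused.append(SensorReading(r.sensor_id, r.region, conf))
    return fused

readings = [SensorReading("buoy-1", "sendai", 0.5),
            SensorReading("buoy-2", "osaka", 0.5)]
posts = [("Huge wave, tsunami coming!", "sendai"),
         ("Nice weather today", "osaka")]
result = fuse(readings, posts)
```

In this toy run, only the Sendai reading is boosted, so the cheap digital feed has told us which expensive physical asset deserves attention first.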
What would really be effective is an open architecture that enables the rapid integration, development and support of multiple data inputs, multiple data fusion/processing solutions and multiple visualisations, rather than building bespoke solutions.
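The open-architecture idea can be sketched as a pipeline against which sources, fusion stages and visualisations register through a common interface, instead of being hard-wired into a bespoke build. The class and method names below are hypothetical, chosen only to illustrate the pattern.

```python
class Pipeline:
    """Illustrative open pipeline: plug in any number of data
    sources, fusion/processing stages and visualisations."""

    def __init__(self):
        self.sources, self.fusers, self.views = [], [], []

    def add_source(self, fn):
        self.sources.append(fn)   # fn() -> list of records

    def add_fuser(self, fn):
        self.fusers.append(fn)    # fn(records) -> records

    def add_view(self, fn):
        self.views.append(fn)     # fn(records) -> rendered output

    def run(self):
        records = [rec for src in self.sources for rec in src()]
        for fuse in self.fusers:
            records = fuse(records)
        return [view(records) for view in self.views]

p = Pipeline()
p.add_source(lambda: [{"type": "radar", "score": 0.4}])
p.add_source(lambda: [{"type": "tweet", "score": 0.9}])
p.add_fuser(lambda recs: [r for r in recs if r["score"] > 0.5])
p.add_view(lambda recs: f"{len(recs)} actionable track(s)")
out = p.run()
```

The design point is that swapping a sensor feed, a fusion algorithm or a display is one registration call, not a rebuild of the whole system.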
Paul Winstanley is currently the Executive Director of the UK Defence Solutions Centre. Prior to taking up this role, Paul worked within Government and publicly traded companies, and founded and ran technology-based SMEs. Paul’s passion is minimising development time, cost and risk by identifying, acquiring and integrating innovative technology and solutions from a diverse range of market sectors.
[1] https://dro.deakin.edu.au/eserv/DU:30049113/takeokachatfield-twittertsunami-2012.pdf
[2] http://www.stevens.edu/ses/deception-detection-and-gender-identification-from-text