As IoT becomes essential to the transportation industry, TransportDeck, Outcomex’s end-to-end IoT solution for this sector, is on track to revolutionise how transport networks operate using AI/ML cameras.
Outcomex has submitted its first patent application in Australia. Currently at provisional patent status, the application protects our in-house artificial intelligence (AI) and machine learning (ML) software for surveillance cameras. This software can be integrated with most smart cameras deployed in public and private transport locations to provide essential monitoring and security.
Making mass-produced smart cameras smarter
Most IP-based cameras with streaming capabilities (also known as object-tracking-capable cameras) can detect and count objects in high-traffic areas. These cameras stream their video feeds into the cloud, where Convolutional Neural Network frameworks (a deep learning/artificial intelligence approach used to analyse visual imagery) process the detections and tracking and compute the metrics.
However, we find it much cheaper and more efficient to do all the detection and counting at the edge* and send only the required metrics to the cloud. Also, the capabilities of industry cameras alone are not comprehensive enough to solve the major analytical use cases the transport industry requires, such as accurate people/vehicle wait times, occupancy metrics at bays, and accurate vehicle boarding details (how many passengers alighted from and boarded a particular vehicle).
*at the edge: processing data at its source and sending only the significant data into the cloud. This lowers the payload traffic into the cloud, reduces both the bandwidth and the compute required to transmit data, and reduces data privacy concerns, as less data is transmitted into the cloud.
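As a rough illustration of the edge-processing idea, the hypothetical sketch below keeps only per-class counts locally and emits a small summary payload, instead of streaming raw detections to the cloud. The function name, detection format, and 0.5 confidence cut-off are all assumptions for illustration, not part of the patented software.

```python
from collections import Counter

# Hypothetical edge-side aggregation: rather than streaming raw frames or
# detections to the cloud, keep only per-class counts and send a compact
# metrics payload. The (class, confidence) tuple format is assumed.
def summarise_detections(detections, min_confidence=0.5):
    """detections: list of (object_class, confidence) tuples from the
    camera's on-board detector. Returns the compact metrics payload."""
    counts = Counter(cls for cls, conf in detections
                     if conf >= min_confidence)
    return {"counts": dict(counts), "total": sum(counts.values())}

payload = summarise_detections([
    ("person", 0.91), ("person", 0.87), ("car", 0.76), ("person", 0.40),
])
# The low-confidence detection is dropped at the edge; only three counted
# objects are reflected in the payload sent to the cloud.
```

The payload here is a few dozen bytes per reporting interval, versus megabytes of video, which is the bandwidth and privacy advantage described above.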
What’s in our patent-pending software? AI/ML camera capabilities
Our software design, which integrates with object-tracking cameras, uses AI-based techniques to automatically and accurately gather the information required to address challenges faced by the public transport industry. It also allows all collected data to be processed at the edge before the essential data is sent to the cloud.
The software design focuses on implementing AI/ML capabilities on top of the existing object detection and tracking systems in IP cameras. The metrics from the camera’s existing detection systems are processed by the open-source ML frameworks in our software as each object passes through a virtual Area of Interest (AOI) polygon in the camera’s field of view. The data is then filtered at the edge so that only the significant data is sent to the cloud, and from there to the application.
Area of Interest (AOI): a section of the field of view that is considered important and is focused on when collecting data, e.g. a particular taxi rank, entry point or drop-off zone. Smart cameras can focus on an AOI by providing higher-quality images and video recordings of these areas, while the rest of the FOV outside the AOI is recorded at a compressed, lower quality to save on bandwidth. An Area of Interest is also referred to as an ROI: Region of Interest.
Field of View (FOV): the area of coverage that the camera can ‘see’ within a scene. The FOV depends on the height and angle of the camera and the size of the lens, which determine how wide and how far the camera can see.
What can our smarter cameras do?
Cameras with our software are now able to automatically and accurately gather information required to assist with challenges faced by the transport industry.
The data gathered includes:
- Real-time view of how many passengers, cars, buses, etc. are waiting at or using a particular bay.
- Average number of passengers and vehicles occupying bays over different time windows, ranging from minutes to 24 hours.
- Average number of passengers and vehicles occupying bays during different times of the day.
- Average number of passengers boarding and alighting vehicles such as buses, taxis, and cars at a particular bay.
- Number of vehicles overstaying at particular bays during different time periods, along with overstay alerting.
- Instances when parking and bus bays were fully occupied during different time intervals.
- Predicted usage patterns of different bays at different times of the day and on different days of the week.
To accurately and efficiently do this, our software can:
1. Limit the number of frames per second of the footage.
This reduces the compute required to process data and reduces the bandwidth when sending data into the cloud.
Why: While smart cameras cope well in quiet environments, high-traffic locations like pick-up/drop-off (PUDO) bays, taxi ranks and bus stops can be very busy and consume a large amount of bandwidth when the captured data is ingested. Processing so many payloads at the edge would require an excessive amount of compute power.
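One simple way to limit frames per second, shown as a hypothetical sketch below, is to subsample frame indices so a camera's native rate is reduced to a target processing rate. The function and parameter names are illustrative assumptions, not the patented implementation.

```python
# Hypothetical frame-rate limiter: keep every Nth frame so that at most
# target_fps frames per second are processed, regardless of the camera's
# native frame rate.
def select_frames(native_fps, target_fps, n_frames):
    """Return the indices of frames to process, evenly spaced so that
    roughly target_fps frames per second are kept."""
    step = max(1, round(native_fps / target_fps))
    return list(range(0, n_frames, step))

# One second of 30 fps footage sampled down to 5 fps: every 6th frame.
kept = select_frames(native_fps=30, target_fps=5, n_frames=30)
```

Dropping from 30 fps to 5 fps cuts the frames to be processed (and any per-frame payloads) by a factor of six, which is the compute and bandwidth saving described above.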
2. Distinguish specific objects that need to be tracked and filter out “noise”
This focuses the detections on the AOI, such as specific parking bays, traffic lights, etc., so objects outside the AOI are not counted. Our algorithm automatically identifies when the position of the AOI has changed, which implies the camera has moved, and automatically recalibrates to derive new AOI coordinates.
*Object: an object that needs to be detected for counting, i.e. people and vehicles.
Why: Object detection and tracking cameras usually detect every object in the camera’s entire field of view. This means a lot of unwanted ‘noise traffic’ that should not be counted in the readings gets detected. Physical infrastructure near the AOI can also make it difficult to mount cameras in the right position to focus on a particular AOI. Our algorithm allows the camera to be mounted wherever is practical and then filters the ‘noise traffic’ out of the detections.
Another approach to creating AOI boundaries is to virtually map the AOI onto the FOV and only consider detected objects within its spatial coordinates.
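Filtering detections to an AOI polygon mapped onto the FOV can be done with a standard point-in-polygon (ray-casting) test, sketched below. The coordinates and polygon are made-up pixel values; this is a generic illustration of the technique, not the patented algorithm.

```python
# Hypothetical AOI filter: the AOI is a polygon in pixel coordinates, and
# only detections whose centre point falls inside it are counted.
def point_in_polygon(x, y, polygon):
    """Ray-casting test: cast a horizontal ray from (x, y) and count how
    many polygon edges it crosses; an odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge spans the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                            # crossing is to the right
                inside = not inside
    return inside

aoi = [(100, 100), (400, 100), (400, 300), (100, 300)]  # e.g. a parking bay
assert point_in_polygon(250, 200, aoi)       # detection inside the bay: counted
assert not point_in_polygon(50, 50, aoi)     # noise outside the AOI: filtered out
```

In practice a vision library routine (for example OpenCV's polygon containment test) would typically be used instead of hand-rolling this, but the filtering principle is the same.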
3. Monitor wait times
All the objects detected within the AOI are fed into their corresponding data pipelines to generate occupancy and wait times.
Why: Councils can use this information to assess traffic and pedestrian wait times and make informed decisions about improving the area. Improvements include adjusting pedestrian crossing times during peak hours and adjusting traffic light timing for vehicles.
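A wait-time pipeline of this kind can be sketched by pairing each tracked object's AOI entry and exit timestamps; the difference is its wait time. The event format and track IDs below are illustrative assumptions.

```python
# Hypothetical wait-time pipeline: each tracked object gets an 'enter'
# event when it appears in the AOI and an 'exit' event when it leaves.
def wait_times(events):
    """events: list of (track_id, event, timestamp_seconds) where event is
    'enter' or 'exit'. Returns {track_id: wait_seconds} for completed visits."""
    entered, waits = {}, {}
    for track_id, event, ts in events:
        if event == "enter":
            entered[track_id] = ts
        elif event == "exit" and track_id in entered:
            waits[track_id] = ts - entered.pop(track_id)
    return waits

waits = wait_times([
    ("person-1", "enter", 10.0),
    ("person-2", "enter", 12.0),
    ("person-1", "exit", 55.0),   # person-1 waited 45 s at the crossing
])
# person-2 is still inside the AOI, so it has no completed wait time yet.
```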
4. Monitor vehicle occupancy times
Stationary objects, such as parked vehicles, can be detected within the AOI, and the length of time each object has been stationary is monitored. Alerts can then be sent to customers if an object has been stationary beyond the threshold period, suggesting an overstay.
Why: Councils can gain information about parking occupancy rates, which allows them to adjust parking times to get optimal use from the parking spaces. Receiving alerts of when vehicles have stayed beyond the parking limit will help councils enforce parking limits and encourage parking turnover.
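An overstay check like the one described can be sketched as a simple dwell-time threshold over the occupancy records. The IDs, timestamps and one-hour limit below are illustrative assumptions.

```python
# Hypothetical overstay alerting: flag any stationary vehicle whose dwell
# time in its bay exceeds the allowed threshold.
def overstays(occupancies, now, limit_seconds):
    """occupancies: {vehicle_track_id: arrival_timestamp_seconds}.
    Returns the ids of vehicles that have stayed longer than limit_seconds."""
    return [vid for vid, arrived in occupancies.items()
            if now - arrived > limit_seconds]

alerts = overstays(
    {"car-17": 0, "car-42": 3000},
    now=3900,
    limit_seconds=3600,          # e.g. a one-hour parking limit
)
# car-17 has been parked for 3900 s, 300 s over the limit; car-42 has not.
```

In a deployment this check would run periodically at the edge, with only the alert (not the footage) sent to the cloud.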
5. Collect trip details and derive the number of passengers who board and alight from a particular vehicle (bus, taxi, etc.)
Cameras can detect and count objects such as different vehicle types, as well as count how many people board and alight these vehicles at different times of the day. Councils can use this information to track vehicle usage trends throughout the day.
Why: Knowing how public transport is used in the community allows councils to make educated decisions on improving their transport system. Improvements include adjusting bus frequency and timetables to cater to peak hour commuters, allocating more taxi zones and implementing traffic controls to ensure commuters and vehicle congestion is avoided during peak hours.
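One plausible shape for the boarding/alighting tally is to associate each person track with the vehicle whose door zone it crosses, then count per direction. The crossing-event format and IDs below are assumptions for illustration; the actual derivation in the patented software may differ.

```python
# Hypothetical boarding/alighting counter: each detected door-zone crossing
# is attributed to a vehicle with a direction, then tallied per vehicle.
def tally_boardings(crossings):
    """crossings: list of (vehicle_id, direction) where direction is
    'board' or 'alight'. Returns {vehicle_id: {'board': n, 'alight': n}}."""
    totals = {}
    for vehicle_id, direction in crossings:
        counts = totals.setdefault(vehicle_id, {"board": 0, "alight": 0})
        counts[direction] += 1
    return totals

totals = tally_boardings([
    ("bus-7", "board"), ("bus-7", "board"), ("bus-7", "alight"),
    ("taxi-3", "board"),
])
# bus-7: 2 boarded, 1 alighted; taxi-3: 1 boarded, 0 alighted.
```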
6. Detect collisions between a vehicle and a person in an AOI.
Cameras can detect objects in the AOI during collisions and determine whether each object is a vehicle or a person.
Why: Councils can determine the causes of recurring accidents at certain AOIs (traffic lights, drop-off bays, bus zones, etc.) and make educated decisions on implementing road and traffic management plans to reduce accidents.
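A common heuristic for flagging a possible vehicle-person collision, sketched below, is to check whether their detected bounding boxes overlap (intersection over union above a small threshold). The boxes and 0.05 threshold are illustrative assumptions, not the patented detection logic.

```python
# Hypothetical collision heuristic: flag a possible vehicle-person collision
# when their detected bounding boxes overlap sufficiently.
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def possible_collision(vehicle_box, person_box, threshold=0.05):
    """A small IoU threshold suffices: even slight box overlap between a
    vehicle and a person inside the AOI is worth flagging for review."""
    return iou(vehicle_box, person_box) >= threshold

# A person's box overlapping a car's box inside the AOI raises a flag.
flag = possible_collision((0, 0, 100, 60), (80, 30, 130, 90))
```

A real system would also use track history (speed, trajectory) to reduce false positives from people simply walking close to parked cars.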
With these advanced capabilities integrated into TransportDeck, Outcomex can provide the transport industry with end-to-end solutions offering smart, accurate metrics, giving operators a better understanding of traffic patterns and, in turn, helping improve transport network operations and resource planning.
Over the next 12 months, Outcomex will look into fine-tuning the patent’s algorithms, in particular the trip detection feature. In addition, we will be advancing our algorithms to detect and differentiate between models of cars, types of taxis (standard, wheelchair-accessible, etc.), and the age and gender of passengers.
We will also be applying our software to use cases outside the transport industry where object counting and detection are required, such as large-crowd venues, stadiums and event environments. Features include tracking queue lengths and queue movement to estimate waiting times, and people counting to manage venue capacity.
Outcomex also has plans to develop and patent new AI/ML software catering to the retail industry, assisting with use cases such as security and theft detection.
TransportDeck is a complete IoT solution designed for the transport industry. Where some technology and service providers only offer one component, TransportDeck provides end-to-end capabilities. This includes sensors and devices, network and connectivity, data processing and the TransportDeck application.
Read more about TransportDeck here