03/02/2022 | Spotlight
The recent rush to develop effective tests for COVID-19 and the successful roll-out of the vaccination programme have put unprecedented pressure on laboratory staff globally. This has highlighted the need for increased automation, but laboratories differ markedly from factory floors: robots often need to work in confined spaces and integrate more closely with humans.
I’m involved in a science partnership exploring how laboratory staff will optimise their interaction with automation and instrumentation in the future, and in particular how gesture controls may be used with collaborative robotic systems to ensure a safe and confident working environment.
Research suggests that voice and physical gestures are key channels for communicating with collaborative robots. But the modern laboratory often has high background acoustic noise from environmental control systems and benchtop equipment such as centrifuges and shakers. Laboratories are also busy places, with many staff not only performing experiments but using the dynamic environment to exchange ideas. Sharing this space with robots that are instructed and controlled solely by voice commands is therefore an unattractive prospect, except in very specific use cases.
For this reason, isolated physical gestures that don’t require additional voice commands are a key area of investigation. We are currently investigating how collaborative robots may be controlled by physical gestures, whilst also communicating their own status. As an illustration, a collaborative robot might be taught to recognise gestures such as Halt! and Start!, while staff learn to recognise gestures generated by the robot, such as Sleep, System Standby and Error.
Recognition of such physical gestures also opens up the possibility of robots communicating in groups to streamline processes and improve efficiency. Recent research has focused on the use of data-gloves to determine gestures, but this is not practical in labs where staff need to be hands-free and often wear specialised protective gloves. For this reason, we have taken a multi-sensor approach to detecting physical gestures, including vision sensors. A helpful factor when monitoring gestures in a laboratory is that the lighting is generally bright and consistent, allowing visual images of high quality. Also, the short distance between robots and staff means the physical effort required to generate a gesture is low and the field of view of the sensors is well-focused.

To enable a multi-sensor array to be retrofitted to existing robots, our research is investigating how sensors may be configured as a “wearable” sleeve. We see this as an effective approach because it is not robot-specific, allowing implementation on a very wide range of robot types. It is also driving innovation in how we think about the re-design of the sensors.
An important consideration is that the sleeve must be cleanable using standard ethanol/water mixtures and laboratory disinfectants, and sterilisable using UV or hydrogen peroxide. To provide feedback to the user that the robot has responded to a gesture, lab staff could themselves be provided with a wearable device. This approach will potentially be helped by the proliferation of existing consumer wearables, such as smart watches.
Data from the multiple sensors can be overlaid to create a map of the scene, from which the region of interest is extracted. The gesture is determined by matching patterns against those stored in a gesture database; the appropriate response for the specific gesture is then determined and instructions are sent to the robot controller. Generalised features characterising a gesture include size and arc, plane, speed and abruptness of movement. The larger the feature set, the better irrelevant gestures are eliminated, but the longer the processing required.
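As a minimal sketch of this matching step (the gesture names, feature values and distance threshold below are illustrative assumptions, not details of our system), an observed feature vector could be compared against stored templates like this:

```python
import math

# Hypothetical gesture templates: feature vectors of
# (size, arc, speed, abruptness), each normalised to [0, 1].
GESTURE_DB = {
    "halt":  (0.9, 0.1, 0.8, 0.9),
    "start": (0.6, 0.7, 0.5, 0.3),
    "sleep": (0.3, 0.9, 0.2, 0.1),
}

def match_gesture(features, threshold=0.5):
    """Return the closest stored gesture, or None if nothing in the
    database lies within the distance threshold (i.e. the movement
    is treated as an irrelevant gesture)."""
    best_name, best_dist = None, float("inf")
    for name, template in GESTURE_DB.items():
        dist = math.dist(features, template)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# An observed feature vector close to the "halt" template:
print(match_gesture((0.85, 0.15, 0.75, 0.85)))  # halt
```

Enlarging the feature vector tightens discrimination between gestures, at the cost of more processing per frame, which is the trade-off described above.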
The richness of the information contained in the database is key, and research is now considering the benefits of a machine learning approach. It is interesting to consider whether this should be built upon data collected from a large number of random staff, or only from those staff permitted to use the system.
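One simple form such a learning approach could take is a k-nearest-neighbour vote over labelled samples collected from several staff members. The samples, labels and feature values below are invented purely for illustration:

```python
import math
from collections import Counter

# Hypothetical labelled training data: (feature_vector, gesture_label),
# with each gesture recorded by more than one staff member so the
# classifier captures person-to-person variation.
TRAINING = [
    ((0.90, 0.10, 0.80), "halt"),
    ((0.85, 0.20, 0.75), "halt"),
    ((0.60, 0.70, 0.50), "start"),
    ((0.55, 0.75, 0.45), "start"),
    ((0.30, 0.90, 0.20), "sleep"),
    ((0.35, 0.85, 0.25), "sleep"),
]

def classify(features, k=3):
    """Majority vote among the k training samples nearest to the
    observed feature vector."""
    neighbours = sorted(TRAINING, key=lambda s: math.dist(features, s[0]))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

print(classify((0.88, 0.15, 0.78)))  # halt
```

Whether the training set is drawn from a broad population or only from authorised users would change how much variation the classifier must absorb, which is exactly the open question raised above.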
From the set of gestures, staff can communicate commands to the robot and the robot can communicate its own status to staff. Early adoption of collaborative robotics is well suited to laboratory applications as processes are generally well defined and structured. There is no great need to communicate subtly, as might be the case in applications with less formal structure such as personalised care or education, where the range of gestures would need to be expanded.
It would perhaps be interesting to explore the scalability of our approach with additional peripherals such as data gloves, voice commands and augmented reality applications. As the population grows, we will enter the era of “big health” requiring complex analysis of millions of samples drawn from across the population. This will create a demand for extensive laboratory automation to remove repetitive tasks from humans. There will be increasing pressure to bring robotics out from behind large and expensive enclosures to utilise laboratory space more efficiently and streamline processes, and this is where collaborative robotics can help.
Key success factors will be ensuring staff see the system as safe and likeable. Perceived safety may have more to do with the response of the robot to the gesture, such as speed, abruptness and displacement of the motion, than to the nature of the gestures themselves.
The challenges are multiple, but the potential rewards in terms of cost and efficiency justify investigation. It is hard to imagine one organisation meeting all the challenges, so strategic partnerships with parties including potential end-users and equipment suppliers are key.
The pandemic has highlighted that governments and other agencies need to be fully prepared for such eventualities, which will require investment in technologies such as automation to support the life science supply chain.
By bringing together the end-users, suppliers and funding bodies to address the technology challenges facing the development of collaborative robotics in laboratories, we can create a meaningful impact in the development of new vaccines and therapeutics.
CTO at Plextek and Design Momentum Life Science Partnership