Associate
October 29, 2024
Solved

STSW-IMG035 Gesture and Hand Posture

  • October 29, 2024
  • 1 reply
  • 799 views

Hello, 

I've previously posted asking about gesture and hand posture output, and I've finally got an output from both models. The problem is that the method of getting an output for gesture and for hand posture is different. For gesture, I can use the GestureEVK software and get some results, but for hand posture I can only get a result through a local terminal/PuTTY. Is it possible to integrate both processes so that they run in one program that can recognize both gesture and posture?

Or is there already a way to have gesture and hand posture running in one program?

Any ideas or hints would be useful! Thank you!

 

Emily

 

 

 

Best answer by labussiy

Hello,

Motion Gesture and Hand Posture are two different solutions, which is why they do not use the same API to get their outputs.

 

Yes, it is possible to merge both solutions. It has been done internally at ST for some demos, and it will be released on st.com in the coming months (Motion Gesture + Hand Posture + Smart Presence Detection).

This is something you can do on your side; here are a few hints:

  • both solutions use the same VL53L8CX driver (ULD), which makes the merge easier
  • you can start from the "motion gesture" C project and add the Hand Posture files to it
  • add Network_Init(&App_Config) after gesture_library_init_configure();
  • after getting the ranging data, call the hand posture functions:

App_Config.RangingData = RangingData;

/* Pre-process data */
Network_Preprocess(&App_Config);

/* Run inference */
Network_Inference(&App_Config);

/* Post-process data */
Network_Postprocess(&App_Config);

 

Then you will have to merge both outputs; for example, with code like this:

// Write the AI output into the Gesture structure for the GUI EVK
if (evk_label_table[(int)(App_Config.AI_Data.handposture_label)] != 0) {
    gest_predictor.gesture.ready = 1;
    gest_predictor.gesture.label = evk_label_table[(int)(App_Config.AI_Data.handposture_label)];
} else {
    gest_predictor.gesture.ready = 0;
    gest_predictor.gesture.label = 0;
    hold_timer = sensor_data.timestamp_ms;
}

These are just some hints, but I hope they help.

Yann

 

 

1 reply

labussiy (Best answer)
Technical Moderator
November 6, 2024
