Visitor II
January 7, 2026
Question

Benchmark Validation in App does not match Arduino Uno R4 WiFi

  • January 7, 2026
  • 2 replies
  • 75 views

Hello.

Forgive me for what might be an incredibly "noob" question, but I'm not ashamed to admit that I am, in fact, a noob.

Over the last few days I've been trying, failing, and lately moderately succeeding at creating an audio classification model with NanoEdge AI Studio.

I'm using an Arduino Uno R4 WiFi. I also have a MAX4466 connected to A0, GND, and 3.3 V.

My entire process up until now:
1. Get a sketch running on my Arduino which simply does a micValue = analogRead(A0) in a loop 1024 times. For each read, I do a Serial.print(micValue); Serial.print(" ");. After 1024 of those, I do a Serial.println("") to go to the next line.

2. With that running on the Arduino, I opened NanoEdge AI Studio, created my project, and recorded a bunch of signals of background_noise, silence, me speaking, and another sound that I want to classify. It's not a HUGE dataset, only a couple of minutes, but I'm steadily adding more to each Class.

3. I ran the benchmark.

4. The benchmark produced some results in the 90+% range.

5. I performed the validation step with me speaking, some background noise, silence, and the beep, and it produced mostly accurate predictions based on whatever sound I was currently forcing. When I wasn't speaking, SILENCE was the most probable; when I was speaking, SPEECH was the most probable; when I did some shuffling and other stuff, background_noise was the most probable; and when I played the other sound, that was the most probable.

6. I "deployed" the library (and saved it to my computer)
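For reference, the line format produced in step 1 boils down to the logic below. It's shown as plain C++ (with a made-up helper name) so it can be run off-board; on the Arduino the same thing is done directly with Serial.print.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical helper mirroring the logging sketch from step 1:
// one signal = 1024 space-separated ADC readings on a single line,
// terminated by a newline.
std::string format_signal_line(const std::vector<uint16_t>& samples) {
    std::string line;
    for (size_t i = 0; i < samples.size(); ++i) {
        line += std::to_string(samples[i]);
        line += ' ';   // Serial.print(micValue); Serial.print(" ");
    }
    line += '\n';      // Serial.println("") after the 1024 reads
    return line;
}
```

NanoEdge AI Studio's data logger expects exactly this layout: one line per signal, values separated by a single delimiter.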

I added the library in Arduino and can include NanoEdgeAI.h and knowledge.h fine.

I copied this code from the example file.

float input_user_buffer[DATA_INPUT_USER * AXIS_NUMBER]; // Buffer of input values
float output_class_buffer[CLASS_NUMBER]; // Buffer of class probabilities
uint16_t id_class = 0;
const char *id2class[CLASS_NUMBER + 1] = { // Buffer for mapping class id to class name
	"unknown",
	"speech",
	"silence",
	"braun_infusomat",
	"background_noise",
};

In my loop() I'm executing this code:

 get_microphone_data();

 neai_classification(input_user_buffer, output_class_buffer, &id_class);
 Serial.print(id_class);
 Serial.print(": ");
 Serial.println(id2class[id_class]);

 // output_class_buffer holds CLASS_NUMBER entries, so the loop must
 // use x < CLASS_NUMBER; x <= CLASS_NUMBER would read one element
 // past the end of the buffer.
 for (int x = 0; x < CLASS_NUMBER; x++)
 {
   Serial.println(output_class_buffer[x]);
 }

 delay(100);
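For completeness, get_microphone_data() isn't shown above. Conceptually it just fills input_user_buffer with the same raw ADC values the logging sketch printed, cast to float, with no extra scaling or offset removal. A minimal sketch of that idea, written as plain C++ so it can be run off-board (the function name and the buffer sizes are assumptions that must match the generated NanoEdgeAI.h):

```cpp
#include <cstddef>
#include <cstdint>

// Assumed to match the generated NanoEdgeAI.h for this project:
// 1024 samples per signal, 1 axis (single microphone on A0).
#define DATA_INPUT_USER 1024
#define AXIS_NUMBER 1

// Hypothetical body of get_microphone_data(): copy the raw ADC
// readings into the float input buffer, exactly as they were logged.
void fill_input_buffer(const uint16_t* raw, float* input_user_buffer) {
    for (size_t i = 0; i < DATA_INPUT_USER * AXIS_NUMBER; ++i) {
        input_user_buffer[i] = (float)raw[i];
    }
}
```

If the values fed to neai_classification() differ in any way (different sample rate, different scaling, a DC offset removed) from what was logged during data collection, the on-device accuracy will not match the Studio benchmark.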

However, the results are nowhere near the accuracy of the test performed on the validation tab in NanoEdge AI Studio.

 

Any thoughts on what I might be doing wrong?

 

    This topic has been closed for replies.


    edwards09 (Author)
    Visitor II
    January 7, 2026

    Sorry, I also meant to note that I don't move the board OR the microphone when going from emulator to on-device.

     

    Technical Moderator
    January 7, 2026

    Welcome @edwards09, to the community!

    I would like to point out that NanoEdge AI Studio only generates code for Cortex-M based MCUs, but your Uno R4 WiFi is equipped with an ESP32-S3 that has an Xtensa LX7 core. Therefore, you cannot use NanoEdge AI Studio for your board. Please also see community threads like this one.

    Regards
    /Peter

    Super User
    January 7, 2026

    @Peter BENSCH wrote:

    NanoEdge AI Studio only generates code for Cortex-M based MCUs, but your Uno R4 Wifi is equipped with an ESP32-S3


    The main processor is the Renesas RA4M1 - which is Cortex-M4.

    The ESP32 is a "coprocessor" for the WiFi.

     

    @edwards09  But still, expecting ST to support a non-ST target is unrealistic.

    You need to go to Arduino for support; e.g.,

    https://docs.arduino.cc/tutorials/nano-33-ble-sense/get-started-with-machine-learning/