[X-CUBE-AI] How can I use external QSPI flash and SDRAM to speed up inference?
- March 16, 2021
- 3 replies
- 4462 views
From the user manual UM2526 and the FP-AI-VISION1 examples, I understand that X-CUBE-AI 6.0.0 (the version I am using, with a 32-bit floating-point model) can read the network weights from external SDRAM, and that this path is optimized so inference is faster than reading from internal flash; that is why I want to place the data there to reduce inference time. Note that the model I trained uses Keras 2.3.0, which is not compatible with X-CUBE-AI 5.0.0, so I have to stay on 6.0.0.
Purpose: To reduce inference time, I want to store the initial weight & bias tables in external QSPI flash and copy them into SDRAM at startup, as in the example.
Question 1: When the data type is float, inference is supposed to be faster with the weights in external memory, yet Table 12 (in the UM2611 manual) shows faster results for internal flash. I don't understand why.
Question 2: In general, how many clock cycles of read latency does the internal flash memory have?
Question 3: If I read the weights from external SDRAM, can I expect the inference time to decrease, and by how much? The cycles/MACC figure I am seeing is slower than I expected.
• To verify the steps above, what I am currently unsure about is how to program the external QSPI flash and SDRAM and how to read/write the data they need to hold.
• I would also like to know the full sequence of steps: configuring the pins in STM32CubeMX, generating code for the IDE, running it in the IDE, and verifying that the memories are programmed correctly.
Thanks.
