Mirror of https://github.com/jomjol/AI-on-the-edge-device.git (synced 2025-12-07 20:16:55 +03:00)
336 lines
7.7 KiB
C++
#include "CTfLiteClass.h"
|
|
#include "ClassLogFile.h"
|
|
#include "Helper.h"
|
|
#include "psram.h"
|
|
#include "esp_log.h"
|
|
#include "../../include/defines.h"
|
|
|
|
#include <sys/stat.h>
|
|
|
|
// #define DEBUG_DETAIL_ON
|
|
|
|
|
|
static const char *TAG = "TFLITE";
|
|
|
|
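
// Typical call sequence for this class (illustrative sketch only; the real
// call sites live outside this file, e.g. in the flow-control classes, and
// the file path below is a made-up example):
//
//   CTfLiteClass tflite;
//   if (tflite.LoadModel("/sdcard/config/dig-class100.tflite") && tflite.MakeAllocate())
//   {
//       tflite.GetInputDimension();                  // fills im_width / im_height / im_channel
//       tflite.LoadInputImageBasis(roiImage);        // roiImage: a CImageBasis* scaled to the input size
//       tflite.Invoke();
//       int result = tflite.GetOutClassification();  // index of the strongest output neuron
//   }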

float CTfLiteClass::GetOutputValue(int nr)
{
    TfLiteTensor* output2 = this->interpreter->output(0);

    int numeroutput = output2->dims->data[1];
    if ((nr+1) > numeroutput)
        return -1000;

    return output2->data.f[nr];
}


int CTfLiteClass::GetClassFromImageBasis(CImageBasis *rs)
{
    if (!LoadInputImageBasis(rs))
        return -1000;

    Invoke();

    return GetOutClassification();
}

int CTfLiteClass::GetOutClassification(int _von, int _bis)
{
    TfLiteTensor* output2 = interpreter->output(0);

    float zw_max;
    float zw;
    int zw_class;

    if (output2 == NULL)
        return -1;

    int numeroutput = output2->dims->data[1];
    //ESP_LOGD(TAG, "number output neurons: %d", numeroutput);

    if (_bis == -1)
        _bis = numeroutput - 1;

    if (_von == -1)
        _von = 0;

    if (_bis >= numeroutput)
    {
        ESP_LOGD(TAG, "NUMBER OF OUTPUT NEURONS does not match required classification!");
        return -1;
    }

    zw_max = output2->data.f[_von];
    zw_class = _von;
    for (int i = _von + 1; i <= _bis; ++i)
    {
        zw = output2->data.f[i];
        if (zw > zw_max)
        {
            zw_max = zw;
            zw_class = i;
        }
    }

    return (zw_class - _von);
}

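// Interpreting the return value (illustrative; the concrete number of classes
// is a property of the loaded model and is only an assumption here): for a
// plain 10-class digit model the returned index 0..9 is the digit itself.
// When a sub-range is searched, the index is shifted back relative to _von:
//
//   int cls = tflite.GetOutClassification(2, 5); // strongest neuron among outputs 2..5,
//                                                // returned as 0..3 relative to _von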

void CTfLiteClass::GetInputDimension(bool silent = false)
{
    TfLiteTensor* input2 = this->interpreter->input(0);

    int numdim = input2->dims->size;
    if (!silent) ESP_LOGD(TAG, "NumDimension: %d", numdim);

    int sizeofdim;
    for (int j = 0; j < numdim; ++j)
    {
        sizeofdim = input2->dims->data[j];
        if (!silent) ESP_LOGD(TAG, "SizeOfDimension %d: %d", j, sizeofdim);
        if (j == 1) im_height = sizeofdim;
        if (j == 2) im_width = sizeofdim;
        if (j == 3) im_channel = sizeofdim;
    }
}


int CTfLiteClass::ReadInputDimenstion(int _dim)
{
    if (_dim == 0)
        return im_width;
    if (_dim == 1)
        return im_height;
    if (_dim == 2)
        return im_channel;

    return -1;
}

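// Illustrative use of the two functions above (sketch only; the surrounding
// variable names are assumptions, not taken from this file):
//
//   tflite.GetInputDimension(true);          // silent: just fill im_width / im_height / im_channel
//   int w = tflite.ReadInputDimenstion(0);   // width
//   int h = tflite.ReadInputDimenstion(1);   // height
//   int c = tflite.ReadInputDimenstion(2);   // channels
//   // the ROI image then has to be scaled to w x h x c before LoadInputImageBasis()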

int CTfLiteClass::GetAnzOutPut(bool silent)
{
    TfLiteTensor* output2 = this->interpreter->output(0);

    int numdim = output2->dims->size;
    if (!silent) ESP_LOGD(TAG, "NumDimension: %d", numdim);

    int sizeofdim;
    for (int j = 0; j < numdim; ++j)
    {
        sizeofdim = output2->dims->data[j];
        if (!silent) ESP_LOGD(TAG, "SizeOfDimension %d: %d", j, sizeofdim);
    }

    float fo;

    // Process the inference results.
    int numeroutput = output2->dims->data[1];
    for (int i = 0; i < numeroutput; ++i)
    {
        fo = output2->data.f[i];
        if (!silent) ESP_LOGD(TAG, "Result %d: %f", i, fo);
    }
    return numeroutput;
}

void CTfLiteClass::Invoke()
{
    if (interpreter != nullptr)
        interpreter->Invoke();
}

bool CTfLiteClass::LoadInputImageBasis(CImageBasis *rs)
{
#ifdef DEBUG_DETAIL_ON
    LogFile.WriteHeapInfo("CTfLiteClass::LoadInputImageBasis - Start");
#endif

    unsigned int w = rs->width;
    unsigned int h = rs->height;
    unsigned char red, green, blue;
    // ESP_LOGD(TAG, "Image: %s size: %d x %d\n", _fn.c_str(), w, h);

    input_i = 0;
    float* input_data_ptr = (interpreter->input(0))->data.f;

    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            red = rs->GetPixelColor(x, y, 0);
            green = rs->GetPixelColor(x, y, 1);
            blue = rs->GetPixelColor(x, y, 2);

            *(input_data_ptr) = (float) red;
            input_data_ptr++;
            *(input_data_ptr) = (float) green;
            input_data_ptr++;
            *(input_data_ptr) = (float) blue;
            input_data_ptr++;
        }

#ifdef DEBUG_DETAIL_ON
    LogFile.WriteHeapInfo("CTfLiteClass::LoadInputImageBasis - done");
#endif

    return true;
}

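// Note on the loop above: the input tensor is filled row by row with the raw
// 8-bit RGB values cast to float (0..255, no normalisation), i.e. the model
// is expected to take a float32 tensor of shape [1, height, width, 3] as
// reported by GetInputDimension(). Any further scaling of the pixel values
// would have to be part of the model itself; it is not done here.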

bool CTfLiteClass::MakeAllocate()
{
    static tflite::AllOpsResolver resolver;

#ifdef DEBUG_DETAIL_ON
    LogFile.WriteHeapInfo("CTfLiteClass::Alloc start");
#endif

    LogFile.WriteToFile(ESP_LOG_DEBUG, TAG, "CTfLiteClass::MakeAllocate");
    this->interpreter = new tflite::MicroInterpreter(this->model, resolver, this->tensor_arena, this->kTensorArenaSize, this->error_reporter);

    if (this->interpreter)
    {
        TfLiteStatus allocate_status = this->interpreter->AllocateTensors();
        if (allocate_status != kTfLiteOk) {
            TF_LITE_REPORT_ERROR(error_reporter, "AllocateTensors() failed");
            LogFile.WriteToFile(ESP_LOG_ERROR, TAG, "AllocateTensors() failed");

            this->GetInputDimension();
            return false;
        }
    }
    else
    {
        LogFile.WriteToFile(ESP_LOG_ERROR, TAG, "new tflite::MicroInterpreter failed");
        LogFile.WriteHeapInfo("CTfLiteClass::MakeAllocate-new tflite::MicroInterpreter failed");
        return false;
    }

#ifdef DEBUG_DETAIL_ON
    LogFile.WriteHeapInfo("CTfLiteClass::Alloc done");
#endif

    return true;
}

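// Note on memory: the interpreter does all of its tensor allocation inside
// tensor_arena, which the constructor takes from the shared PSRAM region
// (psram_get_shared_tensor_arena_memory()) with a fixed size of
// TENSOR_ARENA_SIZE. AllocateTensors() therefore fails whenever the loaded
// model needs more scratch memory than that arena provides.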

void CTfLiteClass::GetInputTensorSize()
{
#ifdef DEBUG_DETAIL_ON
    float *zw = this->input;
    int test = sizeof(zw);   // note: this is the size of the pointer, not of the tensor data
    ESP_LOGD(TAG, "Input Tensor Dimension: %d", test);
#endif
}


long CTfLiteClass::GetFileSize(std::string filename)
{
    struct stat stat_buf;
    long rc = stat(filename.c_str(), &stat_buf);
    return rc == 0 ? stat_buf.st_size : -1;
}

bool CTfLiteClass::ReadFileToModel(std::string _fn)
{
    LogFile.WriteToFile(ESP_LOG_DEBUG, TAG, "CTfLiteClass::ReadFileToModel: " + _fn);

    long size = GetFileSize(_fn);

    if (size == -1)
    {
        LogFile.WriteToFile(ESP_LOG_ERROR, TAG, "Model file doesn't exist: " + _fn + "!");
        return false;
    }
    else if (size > MAX_MODEL_SIZE) {
        LogFile.WriteToFile(ESP_LOG_ERROR, TAG, "Unable to load model '" + _fn + "'! It does not fit in the reserved shared memory in PSRAM!");
        return false;
    }

    LogFile.WriteToFile(ESP_LOG_DEBUG, TAG, "Loading Model " + _fn + " /size: " + std::to_string(size) + " bytes...");

#ifdef DEBUG_DETAIL_ON
    LogFile.WriteHeapInfo("CTfLiteClass::Alloc modelfile start");
#endif

    modelfile = (unsigned char*)psram_get_shared_model_memory();

    if (modelfile != NULL)
    {
        FILE* f = fopen(_fn.c_str(), "rb"); // previously only "r"
        if (f == NULL)
        {
            LogFile.WriteToFile(ESP_LOG_ERROR, TAG, "CTfLiteClass::ReadFileToModel: Can't open file: " + _fn);
            return false;
        }

        fread(modelfile, 1, size, f);
        fclose(f);

#ifdef DEBUG_DETAIL_ON
        LogFile.WriteHeapInfo("CTfLiteClass::Alloc modelfile successful");
#endif

        return true;
    }
    else
    {
        LogFile.WriteToFile(ESP_LOG_ERROR, TAG, "CTfLiteClass::ReadFileToModel: Can't allocate enough memory: " + std::to_string(size));
        LogFile.WriteHeapInfo("CTfLiteClass::ReadFileToModel");

        return false;
    }
}

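// For reference: MAX_MODEL_SIZE, psram_get_shared_model_memory() and the
// matching free function come from the project headers included at the top of
// this file (most likely psram.h and defines.h). The model stays resident in
// the shared PSRAM region for the lifetime of this object and is released
// together with the tensor arena in the destructor via
// psram_free_shared_tensor_arena_and_model_memory().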

bool CTfLiteClass::LoadModel(std::string _fn)
{
#ifdef SUPRESS_TFLITE_ERRORS
    this->error_reporter = new tflite::OwnMicroErrorReporter;
#else
    this->error_reporter = new tflite::MicroErrorReporter;
#endif

    LogFile.WriteToFile(ESP_LOG_DEBUG, TAG, "CTfLiteClass::LoadModel");

    if (!ReadFileToModel(_fn.c_str())) {
        return false;
    }

    model = tflite::GetModel(modelfile);

    if (model == nullptr)
        return false;

    return true;
}

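// Possible extra safety check after tflite::GetModel() (sketch only, not
// currently done in this class): TFLite flatbuffers carry a schema version
// that can be compared against the library's TFLITE_SCHEMA_VERSION before
// building the interpreter, e.g.
//
//   if (model->version() != TFLITE_SCHEMA_VERSION) {
//       LogFile.WriteToFile(ESP_LOG_ERROR, TAG, "Model schema version mismatch");
//       return false;
//   }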

CTfLiteClass::CTfLiteClass()
{
    this->model = nullptr;
    this->modelfile = NULL;
    this->interpreter = nullptr;
    this->input = nullptr;
    this->output = nullptr;
    this->error_reporter = nullptr;   // so the delete in the destructor is safe even if LoadModel() was never called
    this->kTensorArenaSize = TENSOR_ARENA_SIZE;
    this->tensor_arena = (uint8_t*)psram_get_shared_tensor_arena_memory();
}


CTfLiteClass::~CTfLiteClass()
{
    delete this->interpreter;
    delete this->error_reporter;

    psram_free_shared_tensor_arena_and_model_memory();
}

namespace tflite
{
    int OwnMicroErrorReporter::Report(const char* format, va_list args)
    {
        return 0;
    }
}