Imported wiki (#2)
* imported wiki
* updated image paths

Co-authored-by: CaCO3 <caco@ruinelli.ch>

**docs/AI-on-the-edge.md** (new file, 29 lines)

# Welcome to the AI-on-the-edge-device wiki!

Artificial-intelligence-based systems have become established in our everyday lives - just think of speech or image recognition. Most of these systems rely on either powerful processors or a direct connection to the cloud to do the calculations there. With the increasing power of modern processors, AI systems are moving closer to the end user - which is usually called **edge computing**.

Here this edge computing is brought to a practical example: an AI network implemented on an ESP32 device, hence **AI on the edge**.

**Have fun studying the new possibilities and ideas!**

This project is about image recognition and digitization, done entirely on a cheap ESP32 board using artificial intelligence in the form of convolutional neural networks (CNN). Everything, from image capture (OV2640) and image preprocessing (auto alignment, ROI identification) all the way down to image recognition (CNN inference) and result plausibility checking, is done on a cheap 10 EUR device.

All of this is integrated into an environment that is easy to set up and use and that takes care of all the background processing and handling, including a regular job scheduler. The user interface is an integrated web server, which can be adjusted easily and offers the data as an API in different formats.
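
For example, the current reading can be polled from the integrated web server with a plain HTTP request. A minimal sketch - the `/value` endpoint is the same one used by the recording script on the Demo Mode page, and the IP address is a placeholder:

```bash
# Poll the current meter reading from the integrated web server.
wget -qO- http://192.168.1.151/value
```
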
The task demonstrated here is the automated readout of an analog water meter. The water consumption is to be recorded by a home automation system, but the water meter is completely analog and has no electronic interface. The task is therefore solved by regularly taking an image of the water meter and digitizing the reading.

There are two types of CNN implemented: a classification network for reading the digits and a single-output network for digitizing the analog pointers of the sub-digit readings.

This project is an evolution of [water-meter-system-complete](https://github.com/jomjol/water-meter-system-complete), which uses the ESP32-CAM only for taking the image and a 1 GB Docker image to run the neural network backend. Here everything is integrated into an ESP32-CAM module with 4 MB of PSRAM and an SD card as data storage.

This system implements several functions:

* (water) meter readout - it can also handle dual meters with two or even more readings
* picture provider
* file server
* OTA functionality
* web server

The details can be found here: [[Integrated Functions]]

**docs/Addditional-Information.md** (new file, 9 lines)

The following links point to additional information in other repos:

# Digits

* [Overview](https://github.com/jomjol/neural-network-digital-counter-readout)
* [Background](https://github.com/jomjol/neural-network-digital-counter-readout/blob/master/Train_Network.md)

# Analog

* [Overview](https://github.com/jomjol/neural-network-analog-needle-readout)
* [Background](https://github.com/jomjol/neural-network-analog-needle-readout/blob/master/Train_Network.md)

**docs/Best-Practice.md** (new file, 23 lines)

This page shows some best practices.

# Camera Placement

* Move the camera as close as possible (~4 cm); this helps to get rid of reflections.
  -> The focus can be adjusted by turning the outer black ring of the camera.
* If the LED reflections are too strong, put tape over the LED to diffuse the light.
* Change the ImageSize to QVGA under "Expert mode" in the configuration when close enough; this is faster and is often good enough for digit recognition.

# Reflections

* Try to get rid of reflections by rotating the camera so that the reflections are at positions where no number is.
* By using the external LED option, you can place WS2812 LEDs freely away from the main axis.
* Users report that a mobile phone screen protector foil can also help.

# Post-processing

* Filter out the number "9", as a "3" is often misread as a "9"; such a false "9" then voids every correct reading between 3 and 9, because they look like negative flow.
* Split the reading into two parts: while the decimal digits might move too fast to be recognized, at least the slower-moving part will produce a correct reading.
  -> Keep in mind that the offset then needs to be adjusted, e.g. if you have a decimal reading of "3", it needs to become "0.3". This can be done wherever the data ends up being sent, e.g. in Home Assistant using sensor templates.
* If you are using a low resolution and digit mode only, processing can often be done in less than 1 minute. Check the logs to confirm how fast it is (see the sketch below) and then set the interval accordingly under "Expert mode" in the configuration, as the normal mode will lock you to 3+ minutes.
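
To check how long one round actually takes, you can look at the round markers in the message log. A small sketch - the `task_autodoFlow` lines are the ones shown in the log excerpts on the Frequent Reboots page; the log file path on your SD card or file server may differ:

```bash
# Show when rounds start and finish; the timestamp difference is the effective processing duration.
grep -E "next round|round done" log.txt
```
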

***

* [ ] Todo: condense from various discussions, e.g. ~~https://github.com/jomjol/AI-on-the-edge-device/issues/765~~ and https://github.com/jomjol/AI-on-the-edge-device/discussions/984
* [ ] Todo: add images and more in-depth explanations

**docs/Build-Instructions.md** (new file, 30 lines)

# New

See [README.md](https://github.com/jomjol/AI-on-the-edge-device/blob/master/code/README.md)

# Old

## Build the project yourself

- Download and install VS Code
  - https://code.visualstudio.com/Download
- Install the VS Code PlatformIO plugin
  - <img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/platformio_plugin.jpg" width="200" align="middle">
  - Check for error messages; maybe you need to manually add some Python libraries
    - e.g. on my Ubuntu, python3-venv was missing: `sudo apt-get install python3-venv`
- Git clone this project
  - on Linux: `git clone https://github.com/jomjol/AI-on-the-edge-device.git`
- In VS Code, open the `AI-on-the-edge-device/code` folder
  - from the terminal: `cd AI-on-the-edge-device/code && code .`
- Open a PIO terminal (click on the terminal sign in the bottom menu bar)
  - make sure you are in the `code` directory
- To build, type `platformio run --environment esp32cam`
  - or use the graphical interface:
    <img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/platformio_build.jpg" width="200" align="middle">
  - the build artifacts are stored in `code/.pio/build/esp32cam/`
- Connect the device and type `pio device monitor`. There you will see your device and can copy the port name for the next instruction
- Add `upload_port = your_device_port` to the `platformio.ini` file
- Make sure an SD card with the contents of the `sd_card` folder is inserted and that you have changed the WiFi details
- `pio run --target erase` to erase the flash
- `pio run --target upload` - this uploads `bootloader.bin`, `partitions.bin` and `firmware.bin` from the `code/.pio/build/esp32cam/` folder
- `pio device monitor` to observe the logs via UART

**docs/Choosing-the-Model.md** (new file, 57 lines)

# Which model should I use?

On the [Graphical Configuration Page](Graphical-configuration), you can choose different models depending on your needs.

This wiki page tries to help you decide which model to select.
For more technical/deeper explanations, have a look at [Neural-Network-Types](https://github.com/jomjol/AI-on-the-edge-device/wiki/Neural-Network-Types).

## Digit Models

For digits on water meters, gas meters or power meters you can select between two main types of models.

### dig-class11

This model recognizes full digits. All intermediate states are reported as "N" for not-a-number. In post-processing, older values are used to fill in the "N" values where possible.

<img width="333" alt="image" src="https://user-images.githubusercontent.com/412645/190924459-e4023630-c6d0-4a8c-ab56-59e6c0e3ffd8.png">

#### Main features

* well suited for LCD digits
* the ExtendedResolution option is not supported (it only works in conjunction with ana-class100 / ana-cont)

### dig-class100 / dig-cont

These models are used to get a continuous reading with intermediate states. To see what the models are doing, you can go to the Recognition page.

<img width="323" alt="image" src="https://user-images.githubusercontent.com/412645/190924335-b8b75883-7b39-4fd6-a949-49c69834fee4.png">

#### Main features

* suitable for all digit displays
* advantage over dig-class11: results continue to be calculated during the transition between digits
* with the ExtendedResolution option, higher accuracy is possible by adding another digit

Look [here](https://jomjol.github.io/neural-network-digital-counter-readout) for a list of digit images used for the training.

#### dig-class100 vs. dig-cont

The difference is in the internal processing. Take the one that gives you the best results.

## Analog pointer models

### ana-class100 / ana-cont

For pointers on water meters use the analog models. You can only choose between ana-class100 and ana-cont; both do essentially the same.

<img width="231" alt="image" src="https://user-images.githubusercontent.com/412645/190924487-18ed16e1-1c89-45f1-823e-305b7e78ac46.png">

#### Main features

* for all analog pointers, especially on water meters
* with the ExtendedResolution option, higher accuracy is possible by adding another digit

Look [here](https://jomjol.github.io/neural-network-analog-needle-readout/) for a list of pointer images used for the training.

#### ana-class100 vs. ana-cont

The difference is in the internal processing. Take the one that gives you the best results. Both models learn from the same data.

**docs/Configuration-Parameter-Details.md** (new file, 210 lines)

# Configuration Parameter Details

### [MakeImage]

```
[MakeImage]
LogImageLocation = /log/source
WaitBeforeTakingPicture = 5
LogfileRetentionInDays = 15
Brightness = -2
;Contrast = 0
;Saturation = 0
ImageQuality = 5
ImageSize = VGA
FixedExposure = true
```

| Parameter | Meaning | Options/Examples |
| ----------------------- | ------------------------------------------------------------ | ---------------- |
| LogImageLocation | Location for storing a copy of the image | |
| WaitBeforeTakingPicture | Waiting time (in seconds) between switching on the flashlight and taking the image | |
| LogfileRetentionInDays | Number of days for which the log files should be stored | 0 = keep forever |
| Brightness | Adjustment of the camera brightness (-2 ... 2) | |
| Contrast | NOT IMPLEMENTED | |
| Saturation | NOT IMPLEMENTED | |
| ImageQuality | Input image jpg compression quality, 0 (best) to 100 (lowest) | 5 = default |
| ImageSize | Input image size from the camera | only VGA, QVGA |
| FixedExposure | If enabled, the exposure settings are fixed at startup and the waiting time after switching on the illumination is skipped | |

### [Alignment]

```
[Alignment]
InitialRotate = 179
InitialMirror = false
SearchFieldX = 20
SearchFieldY = 20
AlignmentAlgo = Default
FlipImageSize = false
/config/ref0.jpg 104 271
/config/ref1.jpg 442 142
```

| Parameter | Meaning | Options/Examples |
| ------------------------ | ------------------------------------------------------------ | ------------------------------------- |
| InitialMirror | Option for initially mirroring the image on the original x-axis | |
| InitialRotate | Initial rotation of the image before alignment, in degrees (1...359) | |
| FlipImageSize | Changes the aspect ratio after the image rotation to avoid cropping of the rotated image | |
| /config/refx.jpg 98, 257 | Link to a reference image and its corresponding target coordinates | file link is relative to the SD card root |
| SearchFieldX/Y | Search field size in X/Y for finding the reference images [pixel] | |

Two reference images are needed here, so that both rotation and shifting can be compensated. As the alignment is one of the most computation-intensive parts, the search field needs to be limited; the calculation time grows quadratically with the search field size.

### [Digits]

```
[Digits]
Model=/config/digits.tfl
ModelInputSize 20, 32
LogImageLocation = /log/digit
LogfileRetentionInDays = 2
number1.digit1 292 120 37 67
number1.digit2 340 120 37 67
number1.digit3 389 120 37 67
number2.digit1 292 180 37 67
number2.digit2 340 180 37 67
```

| Parameter | Meaning | Options/Examples |
| ---------------------- | ------------------------------------------------------------ | ------------------------------------------ |
| Model | Link to the CNN tflite file used for the AI image recognition | |
| ModelInputSize | Image input size for the CNN network [pixel] | needed to resize the ROI to the input size |
| LogImageLocation | Storage location for the recognized images, including the CNN results in the file name/location | |
| numberX.digitY | ROI for the corresponding digit in the aligned image. <br />More than one number can be specified. The name therefore consists of a name for the number (`numberX`) and the region of interest (`digitY`), separated by `.` | |
| LogfileRetentionInDays | Number of days for which the log files should be stored | 0 = keep forever |

### [Analog]

```
[Analog]
Model=/config/analog.tfl
ModelInputSize 32, 32
LogImageLocation = /log/analog
LogfileRetentionInDays = 2
number1.analog1, 433, 207, 99, 99
number1.analog2, 378, 313, 99, 99
number1.analog3, 280, 356, 99, 99
number1.analog4, 149, 313, 99, 99
number2.analog1, 280, 456, 99, 99
number2.analog2, 149, 413, 99, 99
```

Same as for [Digits], here just for the analog pointers.

### [PostProcessing]

```
[PostProcessing]
number1.DecimalShift = 0
number2.DecimalShift = -1
PreValueUse = true
PreValueAgeStartup = 720
AllowNegativeRates = false
number1.MaxRateValue = 0.1
number2.MaxRateValue = 0.1
ErrorMessage = true
CheckDigitIncreaseConsistency = false
```

Here the post-processing and consistency checks for the readout can be adjusted.

| Parameter | Meaning | Options/Examples |
| ----------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| PreValueUse | Use the previous value for the consistency check and as substitution for NaN (true / false) | |
| PreValueAgeStartup | Maximum age of the PreValue after a reboot (downtime) | |
| AllowNegativeRates | Allow a decrease of the readout value | |
| numberX.MaxRateValue | Maximum change rate from one readout to the next.<br />This can be specified for each number individually. | |
| ErrorMessage | Show error messages | |
| numberX.DecimalShift | Shifting of the decimal separator from the default position between digital and analog.<br />This can be specified for each number individually. | DecimalShift = 2: 123.456 --> 12345.6<br />DecimalShift = -1: 123.456 --> 12.3456<br/> |
| CheckDigitIncreaseConsistency | This parameter controls whether the digits are checked for a consistent change compared to the previous value. This only makes sense if the last digit changes very slowly and every single digit value is visible (e.g. 4.7 --> 4.8 --> 4.9 --> 5.0 --> 5.1). If single digits are skipped, for example because the digits change too fast, this should be disabled (e.g. 4.7 --> 5.0 --> 5.1). | |

### [MQTT]

```
[MQTT]
Uri = mqtt://IP-ADDRESS:1883
MainTopic = wasserzaehler
ClientID = wasser
user = USERNAME
password = PASSWORD
```

Here the connection to the MQTT broker can be configured.

| Parameter | Meaning | Options/Examples |
| --------- | ------------------------------------------------------------ | ---------------- |
| Uri | URI of the MQTT broker, including the port, e.g. mqtt://IP-Address:Port | |
| MainTopic | MQTT main topic under which the counters are published. <br />The single values are published with the following keys: `MainTopic/number/PARAMETER`, where PARAMETER is one of: value, rate, timestamp, error and json<br/>The general connection status can be found in `MainTopic/connection` | |
| ClientID | Client ID used to connect to the MQTT broker | |
| user | User for MQTT authentication | (optional) |
| password | Password for MQTT authentication | (optional) |
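
To quickly verify what the device publishes, you can subscribe to everything below the configured MainTopic with any MQTT client. A minimal sketch using the `mosquitto_sub` command-line client (not part of this project); the broker address and credentials are placeholders matching the example above:

```bash
# Subscribe to all topics below the MainTopic and print topic + payload.
mosquitto_sub -h 192.168.1.10 -u USERNAME -P PASSWORD -t 'wasserzaehler/#' -v
```
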
### [AutoTimer]

```
[AutoTimer]
AutoStart= true
Intervall = 4.85
```

This section is used to automatically trigger the periodic readout.

| Parameter | Meaning | Options/Examples |
| --------- | ----------------------------------------------- | ------------------------------------------------------------ |
| AutoStart | Automatically trigger the readout after startup | |
| Intervall | Readout interval in minutes | Values smaller than 2 minutes do not make sense, as this is roughly the time needed for one detection |

### [Debug]

```
[Debug]
Logfile = true
LogfileRetentionInDays = 2
```

This section is used to switch on extended logging. It is optional; by default only minimal logging is enabled.
**Attention:** with extended logging the size of the log files (`/log.txt`, `/alignment.txt`) can increase rapidly, therefore manual deletion from time to time is recommended.

| Parameter | Meaning | Options/Examples |
| ---------------------- | -------------------------------------------------------- | ------------------------------ |
| Logfile | Turn the extended logging on (true) or off (false) | parameter and section optional |
| LogfileRetentionInDays | Number of days for which the log files should be stored | 0 = keep forever |

### [System]

```
[System]
TimeZone = CET-1CEST,M3.5.0,M10.5.0/3
;TimeServer = TIMESERVER
;Hostname = undefined
SetupMode = false
```

This section contains general system settings such as the time zone, time server, hostname and setup mode.

| Parameter | Meaning | Options/Examples |
| ---------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| TimeZone | The time zone of the system can be specified | Central European, with summer time adjustment: `CET-1CEST,M3.5.0,M10.5.0/3` |
| TimeServer | A dedicated time server can be specified. | default = `pool.ntp.org` |
| Hostname | In addition to the `wlan.ini`, the hostname can be specified here. It will be transferred to the `wlan.ini` and initiates a reboot | |
| SetupMode | If enabled, the server starts in an initial setup mode. This is automatically disabled at the end of the setup | |

### [Ende]

No function; it just marks that the configuration is done!

**docs/Configuration.md** (new file, 74 lines)

Most of the settings can be modified with the help of a web-based [graphical user interface](Graphical-configuration). This is hosted through the web server on the ESP32.

More configuration parameters can be edited by hand in the `config.ini` and corresponding files in the `/config` directory on the SD card.

If you were using version 1 of the water meter, you can easily transfer the configuration to the new system by following the steps in this [migration description](MigrateOldConfigToNew.md).

## Processing / config.ini principle

The principle is very simple and can most easily be described as a flow of processing steps. Each step has a dedicated parameter section in the ``config.ini``, which is indicated by brackets ```[name_of_step]```. The steps are processed in the order written in the config file. That means you first have to describe the image taking, then the aligning and cutting, and only after that you can start to configure a neural network. The last step is the post-processing.

### Processing steps - Overview

The following gives a short overview of the available steps. This order is also the suggested order for the processing flow. Single steps can be left out if not needed (e.g. omit the analog part if only digits are present).

#### 1. ``[MakeImage]``

* This step parametrizes the taking of the image by the ESP32-CAM. Size, quality and storage for logging and debugging can be set.

#### 2. ``[Alignment]``

* Image preprocessing, including image alignment with reference images

#### 3. ``[Digits]``

* Neural network evaluation of an image for digits. The neural network is defined by a tflite-formatted file and the output is a number between 0 .. 9 or NaN (if the image is not unique enough)

#### 4. ``[Analog]``

- Neural network evaluation of analog counters. The neural network is defined by a tflite-formatted file and the output is a number between 0.0 .. 9.9, representing the position of the pointer.

#### 5. ``[PostProcessing]``

- Combines the individually converted pictures into the overall result. It also implements some error corrections and consistency checks to filter out wrong readings.

#### 6. ``[MQTT]``

- Transfer of the readings to an MQTT broker.

#### 7. ``[AutoTimer]``

- Configuration of the automated flow start at the startup of the ESP32.

#### 8. ``[Debug]``

- Configuration of debugging details

#### 9. ``[Ende]``

- No meaning, just an additional indication that the configuration is finished.

**A detailed parameter description can be found here: [[Configuration Parameter Details]].**

## Graphical configuration interface

It is recommended to do the configuration of the alignment structures and ROIs through the graphical user interface. A step-by-step instruction can be found here: [[Graphical Configuration]]

## Background for Image Alignment

Details on the image recognition flow can be found in the other project here: https://github.com/jomjol/water-meter-system-complete/blob/master/images/Alignment_procedure_draft.pdf

The ```config.ini``` here has the same functionality and options, but a slightly different syntax, because a self-written ini parser is used. For the migration see [here](MigrateOldConfigToNew.md).

### Integration into Home Assistant

Thanks to the help of user @deadly667, here are some hints for the integration into Home Assistant: [[Integration-Home-Assistant]]

**docs/Correction Algorithm.md** (new file, 74 lines)

# Correction Algorithm

After the digitization of the images and their composition into a number, a checking and correction algorithm is applied. It is explained here.

There are several reasons why a check might be necessary:

1. In the case of digits, the output is "N" (= NaN = Not-a-Number) if the digit cannot be detected reliably. This happens, for example, if the image shows a digit between two states.
2. Replacing the "N" with a previous value might not be sufficient, because the digit may have changed in the meantime.
3. One of the numbers is misread. This can always happen with neural network processing.

### Terms and definitions

##### PreValue

The last correctly read value - either from a previous correctly identified reading or set manually by the user.

This is used to replace "N"s and to check the absolute change.

##### Digits

Values that are digitized from a digit display. There are 11 allowed values:

1. Digits: 0, 1, 2, ... 9
2. N = Not-a-Number - representing an ambiguous state between two numbers

##### Analogs

These are values derived from a pointer-type meter. They never have the state "N".

##### CheckDigitIncreaseConsistency

If this is enabled, an "intelligent" algorithm is used to derive from the zero crossing of discrete digit positions whether the number should have been increased. This is relevant because on some digit meters the increase of a digit to the next number can already be seen before the sub-digit has gone through zero.

For example: 16.6 --> 16.7 --> 1N.8 --> **17.9** corrected to 16.9 --> 17.0 --> 17.1

As you can see, the 17.9 is a false reading: the 7 is already assumed to be readable although the sub-digit has not yet crossed zero. In this case the CheckDigitIncreaseConsistency algorithm corrects this to 16.9.

A detailed description of the algorithm can be found below (not yet ready!)

##### Negative Rate allowed

Most meters only have increasing numbers and do not count backwards. Therefore a negative rate (= negative change compared to the PreValue) is surely a false value. This can be checked and flagged as a false reading.

##### MaxRateValue / MaxRateType

Here the maximum change from one reading to the next can be limited. If a false reading of the neural network results in a change larger than this, the reading is flagged as false. Two types of comparison are possible:

1) **AbsolutChange**: Here the difference between the PreValue and the current reading is compared directly, independent of how much time has passed since the last reading.
2) **RelativeRate**: In this case a change rate in the unit of change/minute is calculated, taking the time between the last and the current reading into account. Be careful: with increasing time, the absolute allowed change increases.
   Example: a relative rate of 0.05 m³/minute --> after 20 minutes a maximum change of 20 minutes * 0.05 m³/minute = 1 m³ is accepted. That means a false reading of up to 1 m³ cannot be detected as false after about 20 minutes.
   Considering that there might be no change in the meter for hours (e.g. during the night), a much bigger change could then also be accepted.
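
The RelativeRate check described above is simple arithmetic. A shell sketch of the idea (illustration only, not the firmware implementation; the values are made up):

```bash
# allowed_change = MaxRateValue [units/minute] * minutes since the last reading
prev=123.456; curr=124.700; minutes=20; max_rate=0.05
allowed=$(awk -v r="$max_rate" -v m="$minutes" 'BEGIN {print r*m}')
delta=$(awk -v p="$prev" -v c="$curr" 'BEGIN {print c-p}')
awk -v d="$delta" -v a="$allowed" 'BEGIN {exit !(d > a)}' \
  && echo "flagged as false reading (change $delta > allowed $allowed)" \
  || echo "accepted (change $delta <= allowed $allowed)"
```
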

#### Flow Chart







## CheckDigitIncreaseConsistency Algorithm

The CheckDigitIncreaseConsistency algorithm works on the digits only. Because a digit can roll over a little earlier or later than the zero crossing of the digit before it, readings directly before and after a zero crossing can be wrong. Therefore a simple algorithm is applied that checks the consistency of the zero crossing and the change in the following digit. This is applied digit by digit, starting with the lowest-order digits.



**docs/Demo-Mode.md** (new file, 72 lines)

For demo and testing purposes, the device can use pre-recorded images.

You need to enable it in the configuration (`TakeImage > Demo`) and also provide the needed files on the SD card.

One image per round gets used, starting with the first image for the first round.

The first image is also used for the reference image and the alignment.

Once the last image is reached, it starts again with the first one.

## Example Demo
You can use the following demo or create your own.
Just install it using the OTA update functionality.

- [demo.zip](https://github.com/jomjol/AI-on-the-edge-device/files/10320454/demo.zip) (this is just a zip of [this](https://github.com/jomjol/AI-on-the-edge-device/tree/master/code) folder in the repo)

## SD-Card Structure
```
demo/
├── 520.8983.jpg
├── 520.9086.jpg
├── 520.9351.jpg
├── ...
└── files.txt
```

- The jpg files can have any name
- The jpg files must be smaller than 30'000 bytes
- The `files.txt` must contain a list of those files, e.g.:
```
520.8983.jpg
520.9086.jpg
520.9351.jpg
```
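
A small sketch to generate `files.txt` from the images and to check the size limit mentioned above, assuming the demo files are prepared in a local `demo/` folder before copying them to the SD card:

```bash
# List all demo images in files.txt (file names only, no path).
ls demo/*.jpg | xargs -n1 basename > demo/files.txt

# Warn about images that exceed the 30'000 byte limit (GNU find).
find demo -name '*.jpg' -size +30000c -printf 'too big: %p (%s bytes)\n'
```
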
## Recording
To record real images of a meter, you have to periodically fetch `http://<IP>/img_tmp/raw.jpg`.

To automate this, you can use the following shell script (Linux only):
```bash
#!/bin/bash

# Make sure the comparison file exists for the first iteration
touch value_previous.txt

while true; do
    echo "fetching value..."
    wget -q http://192.168.1.151/value -O value.txt

    value=`cat value.txt`
    echo "Value: $value"

    # Only fetch a new image when the reading has changed
    diff=`diff value.txt value_previous.txt`
    changed=$?
    #echo "Diff: $diff"

    if [[ $changed -ne 0 ]]; then
        echo "Value changed:"
        echo $diff
        echo "fetching image..."
        wget -q http://192.168.1.151/img_tmp/raw.jpg -O $value.jpg
    else
        echo "Value did not change, skipping image fetching!"
    fi

    cp value.txt value_previous.txt

    echo "waiting 60s..."
    sleep 60
done
```

## How does it work
The demo mode tries to interfere as little as possible with the normal behavior. Whenever a camera framebuffer is taken (`esp_camera_fb_get()`), it replaces the framebuffer content with the image from the SD card.

**docs/Error-Codes.md** (new file, 37 lines)

This page lists the possible error codes, their meaning and possible solutions.

The effective error codes can be found [here](https://github.com/jomjol/AI-on-the-edge-device/blob/rolling/code/components/jomjol_helper/Helper.h).
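
The codes are powers of two, which suggests they are combined as bit flags. The following is only a sketch under that assumption (check Helper.h linked above for the authoritative definitions):

```bash
# ASSUMPTION: a reported error value is the bitwise OR of the individual codes listed below.
code=0x00000201   # example: NTP failed (0x200) + PSRAM bad (0x001)
for flag in 0x001 0x002 0x004 0x100 0x200; do
    if (( code & flag )); then
        printf 'flag 0x%08X is set\n' "$flag"
    fi
done
```
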

# Critical Errors
These errors make the normal operation of the device impossible.
Most likely they are caused by a hardware issue!

## `0x00000001` PSRAM bad
Your device most likely has no PSRAM at all or it is too small (it needs to have at least 4 MBytes)!
See https://github.com/jomjol/AI-on-the-edge-device/wiki/Hardware-Compatibility
Usually the log shows something like this:
```
psram: PSRAM ID read error: 0xffffffff
cpu_start: Failed to init external RAM!
```

## `0x00000002` Heap too small
The firmware failed to allocate enough memory. This is most likely a consequential error of a bad PSRAM!

## `0x00000004` Cam bad
The attached camera can not be initialized.
This usually has one of the following reasons:
- The camera is not supported, see https://github.com/jomjol/AI-on-the-edge-device/wiki/Hardware-Compatibility
- The camera is not attached properly -> Try to remove and attach it again. Make sure you push the black part far enough into the socket!
- The camera or the camera cable is damaged

# Non-Critical Errors
These errors can be caused by a problem during initialization. It is possible that the error has no impact at all or that a reboot solves it.

## `0x00000100` Cam Framebuffer bad
The firmware was unable to initialize the camera framebuffer.
The firmware will continue to work, but other consequential errors might arise.
A reboot of the device might help.

## `0x00000200` NTP failed
The firmware failed to get the world time from an NTP server. The firmware will continue to work, but will have a wrong time.

**docs/Error-Debugging.md** (new file, 62 lines)

# Error Debugging

## Rebooting

##### General Remarks

1. Due to the rather complex code with a lot of external libraries and the limited availability of memory, a reboot of the device from time to time is "normal". The background is memory leakage and the resulting lack of free memory.

2. The hardware of the ESP32CAM varies in quality. With one and the same hardware, I have seen reboot rates ranging from every 5 detection runs up to every 250 detection runs.

##### Getting deeper inside

Have a look into the log file (``/log/message/...``).

* If the log file is very short, you need to enable enhanced logging in the ``config.ini`` (Debug --> ``logfile = true``).

Analyze the debugging output of the serial interface:

* Connect a serial-to-USB interface (as for flashing) and log the serial communication (see the sketch below)
* There is a lot more intermediate information there, and the lines right before the reboot tell you where the firmware fails
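
A simple way to capture such a log, assuming the device is attached via the same USB/UART adapter used for flashing and PlatformIO is installed (as in the build instructions); the port is a placeholder and 115200 baud is the usual ESP32 console speed:

```bash
# Capture the serial output to a file while still seeing it live.
pio device monitor --port /dev/ttyUSB0 --baud 115200 | tee serial.log
```
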
**If you open an issue about this, please additionally post these two pieces of information.**

**Don't forget to remove your WLAN password from the serial log!**

## Often observed problems

### Hardware failure
* Camera not working --> check the interface, test another module
* Low-cost module with only 2MB of PSRAM instead of 4MB --> image taking will fail first. This will never work due to insufficient memory

### ROI misaligned

<img src="https://user-images.githubusercontent.com/108122193/188264361-0f5038ce-d827-4096-93fb-5907d3b072b4.png" width=30% height=30%>

This typically happens if you have suboptimal "Alignment Marks". A very simple and working solution is to put high-contrast stickers on your meter and place the "Alignment Marks" on them (see picture below).

<img src="https://user-images.githubusercontent.com/108122193/188264752-c0f2a2be-0c22-40de-afaf-fd55b2eb4182.png" width=30% height=30%>

If you still have issues after those adjustments, you can try to adjust your alignment settings in expert mode:
<img src="https://user-images.githubusercontent.com/108122193/188382213-68c4a015-6582-4911-81bc-cdce8ef60ed2.png" width=75% height=75%>

### My analog meters are recognized as digital counters or vice versa

<img src="https://user-images.githubusercontent.com/108122193/188265470-001a392f-d1f4-46a3-b1e8-f29ec41c8621.png" width=40% height=40%>

1. First, check that your ROIs are correctly defined
2. Second, verify that the names of your analog and digital ROIs are different

### Recognition is working well, but the numbers aren't sorted correctly

You have to sort your ROIs correctly (bigger to smaller). Select a ROI and click either "move next" or "move previous". Repeat until your ROIs are correctly sorted.

<img src="https://user-images.githubusercontent.com/108122193/188264916-03befff1-4e61-4370-bd5a-9168a88c57f2.png" width=50% height=50%>

**docs/External-LED.md** (new file, 50 lines)

## External LED

The internal flash LED is very close to the camera axis. This results in reflections, especially in the case of flat glass surfaces such as those of power meters.
To circumvent this problem, it is now possible to control external LEDs, which can then be placed somewhere else in the setup. Since not simple LEDs are used, but RGB LEDs with a digital interface such as the WS2812, not only the position but also the color and intensity of the illumination can now be adjusted. The following image shows a direct comparison of the "old" internal flash LED and two off-axis LEDs.

<img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/intern_vs_external.jpg" width="700">

There is also a new [meter adapter](https://www.thingiverse.com/thing:5028229) available. It has two features: it is designed for **small clearances** in front of the meter and prepared for **WS2812 LEDs**.

<img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/Power_Meter_Mounted.jpg" width="500">

#### 1. Hardware installation of the LED strip

The control line of the LED strip is connected to GPIO12 through a 470 Ohm resistor.
For power supply stabilization a capacitor between 5V and ground is recommended; here a 470µF polymer capacitor is used. The 5V output of the ESP32 is used as the power supply, as shown in the following wiring.

<img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/install_external_led.jpg" width="500">

#### 2. Software configuration

The handling of the WS2812 LED controller needs some additional libraries, therefore it is controlled within a dedicated section called ``GPIO Settings``. The external LED strip is connected to GPIO12. After activating the "GPIO Settings" section, the internal flash is disabled by default. In order to activate the external LEDs, you need to activate ``GPIO 12 state`` and select ``"extern flash light ws281x ..."``.

<img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/external_GPIO_settings.jpg" width="700">

| Parameter | Meaning |
| -------------- | ------------------------------------------------------------ |
| LED-Type | There are several types of controllers implemented: WS2812(B), WS2813, SK6812 |
| Numbers of LED | Number of LEDs on the LED strip |
| LED Color | The color and intensity can be controlled directly by a red/green/blue value, each within the range from 0 (off) to 255 (full) |

Enabling the GPIO settings automatically disables the flash LED. You can therefore re-enable it here manually by checking GPIO4 and choosing ``"build-in led flash light"``. It is not recommended to use both illuminations in parallel.

**docs/FAQs.md** (new file, 63 lines)

# Frequently Asked Questions

#### My device is rebooting frequently. What can I do?

There are several reasons for reboots:

* Frequent HTML requests
* Wrong configuration, missing configuration files
* Unstable hardware - see [[Hardware Compatibility]]

There is a dedicated wiki page about this: [[Frequent Reboots]]

#### How accurate are the detections?

It is hard to give a specific accuracy number. It depends on many factors, e.g.

* How in-focus is your camera?
* How sturdy is the camera mount? Does it slightly move over extended periods of time?
* What type of meter are you reading? Is the meter already in the training data set?
* Are you trying to read digits, an analog dial, or both?
* etc.

Anecdotally, the authors of this wiki have great success with the meter. While the AI algorithm itself is not perfect and sometimes returns `NaN` or incorrect values, other post-processing / prevalue / sanity checks help ensure such invalid values are filtered out. With the correct settings, one author has been running this device for 1 month without any incorrect values reported.

See the FAQs below for more details and configuration hints.

#### My numbers are not detected correctly. What can I do?

* There is a dedicated wiki page about the correct settings: [[ROI Configuration]]
* This page also includes the instructions for gathering new images for the training.

#### How can I ensure invalid numbers are never reported?

As mentioned above, the AI algorithm is not perfect. Sometimes it may read an incorrect value.

We can tune the software to _almost_ never report an incorrect value. There is a tradeoff though: the software may report _stale_ values - i.e. it will drop incorrect values for a potentially long period of time, resulting in the meter reading being outdated by hours. If never receiving an incorrect value is important to you, consider tolerating this tradeoff.

You can change the following settings to reduce incorrect readings (but potentially increase staleness of data):
* Set a prevalue via the UI, then change the `PostProcessing` configuration option `PreValueAgeStartup` to a much larger number (e.g. `43200` = 30 days).
* Change the `PostProcessing` configuration option `MaxRateType` to be time based instead of absolute. Set `MaxRateValue` to something realistic (e.g. `5` gal/min). You can often find the max flow rate your meter supports directly on the cover.
* Reduce the `AutoTimer` configuration option `Intervall` to the lowest it can be (e.g. `3` min). The more often you take readings, the less likely data staleness is to occur.

#### Even after I have set everything up perfectly there are false readings - especially around the zero crossing (roll-over to the next number)
* The roll-over behavior differs between meters. E.g.:
  * The roll-over starts from different previous positions (e.g. at 7, 8 or 9)
  * The neutral position (no rolling) is not perfectly at zero, but rather at something like 7.9 or 8.1, even if it should be exactly 8

* The "PostProcessingAlgo" tries to judge from the individual readings what the number should be.
  * For example, if the previous digit is a "1" but the next reading seems to be an "8.9", most probably there was a "zero crossing" and the number is a "9" and not still an "8"

* Currently the algorithm is tuned to fit most meters and cases. But the parameters do not fit perfectly for all situations, so there might be intermediate states where the reading is false.
  This is especially the case at the positions where the roll-over (zero crossing) is just starting.
* To prevent sending false values, there is the possibility to limit the maximum allowed change (MaxRateChange).
  Usually, after some time and after the counters have moved a bit further, the reading becomes stable again.
* To handle this fully, a parametrized setting would be needed. This is rather complicated to implement, as subtle changes make a relevant difference. Currently this is not implemented.
  So please be a bit patient with your meter :-)

**docs/Frequent Reboots.md** (new file, 131 lines)

# Frequent reboots

There are several types of reboots. To get a deeper insight, turn on the logging:

1. Internal logging (`config.ini`)
2. Serial log of the UART interface (same as for flashing the firmware)

There are two principal types of reboots:

1. Random reboots (always different timing and situation)
2. Permanent reboots, always at the same point

______

### Random reboots

Random reboots have two reasons: overload during HTTP access and an unstable system.

In general: there are several mechanisms in the firmware (like saving previous values) to allow a "smooth" reboot without too much noticeable disturbance.

##### Overload during HTTP access

If you frequently access the web server with HTTP requests, the firmware tends to reboot. This especially happens during the first run and when the ESP32 is busy with the digitization flow.

The reasons for this are running out of memory during a flow and minor memory leaks in combination with missing error handling.

There is nothing you can do about this kind of reboot, besides two things:

1. Support the firmware development with improved and tested code
2. Be patient :-)

##### Unstable system

If your system sometimes runs smoothly over several runs and sometimes reboots seemingly at random, you have a partially unstable device.

You can check this in the standard log file on the SD card:

```
2021-12-26T06:34:09: task_autodoFlow - round done
2021-12-26T06:34:09: CPU Temperature: 56.1
2021-12-26T06:38:00: task_autodoFlow - next round - Round #23
```

Here you see that round #23 is starting, so obviously there were no reboots in the last 22 rounds. There is hardware (ESP32CAM) where only 2-3 stable rounds are possible and other hardware where way more than 100 rounds without any reboot are possible.
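
The round counter apparently starts again after a reboot (which is why round #23 implies 22 rounds without a reboot), so counting how often a low round number appears gives a rough impression of the stability. A small sketch, assuming the log file has been copied from the SD card (or downloaded via the file server) and that the first round is logged as "Round #1":

```bash
# Each occurrence of "Round #1" at the end of a line marks a fresh start, i.e. one (re)boot.
grep -c 'Round #1$' log.txt
```
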
There is nothing you can do about it, besides testing different hardware.

______

### Permanent reboots

Permanent reboots at the same point in the flow indicate a systematic problem in either the hardware or the configuration. They usually happen during the first run, when all needed parts of the firmware are loaded for the first time.

To find the reason, the serial log of the UART interface from startup until the reboot is very helpful. It can be captured using the USB/UART interface - the same as for flashing the firmware - by logging the serial output of the ESP32.

Possible problems:

* SD card
* PSRAM too low
* Configuration missing

##### SD card problems

The ESP32CAM is a little bit "picky" with the supported SD cards. Due to the limited availability of GPIOs, the SD card can only be accessed in 1-wire mode. Therefore not all SD cards are supported. Several error cases can happen:

###### No SD card

Easy to detect: fast blinking red LED directly after startup, no reaction of the web server etc. at all.

###### SD card not supported at all

Error message about no detectable SD card in the log file. A **normal looking** log for a 16GB SD card is like this:

```
09:38:25.037 -> [0;32mI (4789) main: Using SDMMC peripheral[0m
09:38:25.037 -> [0;32mI (4789) main: Using SDMMC peripheral[0m
09:38:25.138 -> Name: SC16G
09:38:25.138 -> Type: SDHC/SDXC
09:38:25.138 -> Speed: 20 MHz
09:38:25.138 -> Size: 15193MB
```

Otherwise there is some error message.

###### SD card recognized but not supported

This is the most annoying error: the SD card is detected, but the files cannot be read. Most probably this shows up as a problem with the WLAN connection, as the first file needed is the `wlan.ini` in the root directory.

##### PSRAM too low

In order to work, 4 MB of PSRAM are necessary. Normally the ESP32CAM is equipped with 8 MB, of which only 4 MB can be used effectively.
Sometimes there is hardware where only 2 MB of PSRAM are present - **even if you have bought an 8 MB module**.

You can identify the amount of PSRAM in the serial log file:

```
09:38:21.224 -> [0;32mI (881) psram: This chip is ESP32-D0WD[0m
09:38:21.224 -> [0;32mI (885) spiram: Found 64MBit SPI RAM device[0m
09:38:21.224 -> [0;32mI (890) spiram: SPI RAM mode: flash 40m sram 40m[0m
09:38:21.224 -> [0;32mI (895) spiram: PSRAM initialized, cache is in low/high (2-core) mode.[0m
```

Here you see 64MBit (= 8MByte), which is okay. A bad module would show: 16MBit.
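
If you have captured the boot output to a file (as described on the Error Debugging page), a quick way to check the detected PSRAM size - a small sketch, assuming the boot messages look like the excerpt above:

```bash
# "Found 64MBit SPI RAM device" (= 8 MByte) is fine; "Found 16MBit ..." (= 2 MByte) is too small.
grep -i "SPI RAM device" serial.log
```
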

The error in the SD log file is typically related to taking the image (tbd), as the first time the system runs out of memory is usually when it tries to transfer an image from the camera to the PSRAM.

There is nothing you can do other than buying a new ESP32CAM with **really** 64MBit of PSRAM.

##### Configuration missing

Several files are needed during one run cycle. If one of them is missing, the firmware lacks information and tends to reboot due to missing error handling:

* `/wlan.ini`

* `/config/config.ini`

* `/config/XXXXX.tflite` (one for analog and one for digital)

where XXXXX is the file name that is written in the `config.ini`.

**docs/Gasmeter-Log-Downloader.md** (new file, 20 lines)

## Gasmeter Log-Downloader

This small tool downloads the logfiles from your ESP32 and stores the last value of each day in a *.csv file.

To use this tool you need to **activate the debug logfile** in your configuration (Configuration / Debug / Logfile). I go with a retention of 30 days.

It downloads only the past logfiles (yesterday and older).

You can define the maximum number of logfiles to download (beginning from the newest [yesterday]).

I wrote this tool to get a chart of the daily gas consumption, to optimize my gas-powered heating.

**Variables to define by yourself:**

- **URL to logfile path on the device:** "http://ESP32-IP-Address/fileserver/log/message/"
- **Download logfiles to:** enter a valid directory, e.g. "D:\Gaszaehler\Auswertung\Log-Downloads\"
- **Output CSV file:** enter a valid directory, e.g. "D:\Gaszaehler\Auswertung\DailyValues.csv"
- **Download logfiles from the past # days:** enter the maximum number of logfiles you want to download (<= your logfile retention value in your device configuration)

Feel free to optimize and modify it.
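
The tool itself is not included on this page. As a rough sketch of only the download step (not the original tool, and without the CSV aggregation), the message logs can be mirrored from the device's file server - assuming the file server provides a directory listing that `wget -r` can follow; the IP address and target directory are placeholders:

```bash
#!/bin/bash
# Mirror the message log directory from the ESP32 file server to a local folder.
ESP32_IP="192.168.1.151"
TARGET_DIR="./log-downloads"

mkdir -p "$TARGET_DIR"
wget -q -r -np -nH --cut-dirs=2 -P "$TARGET_DIR" "http://${ESP32_IP}/fileserver/log/message/"
```
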

**docs/Graphical-configuration.md** (new file, 114 lines)

|
||||
# Graphical configuration
|
||||
|
||||
### **General remark:**
|
||||
|
||||
- to activate the changes, currently the device needs a restart after saving the changes.
|
||||
|
||||
- partially the commands needs processing on the ESP32 device. This is not very fast - so please be patient.
|
||||
|
||||
- too frequent http-request could result in a reboot of the ESP32 - normally this is not a problem as the server react about 30s later normally.
|
||||
|
||||
|
||||
|
||||
## Access to the graphical user interface
|
||||
|
||||
The graphical configuration mode can be reached via the "Edit Configuration" button in the main menue (`/index.html`):
|
||||
|
||||
* <img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/config_s1_access.jpg" width="600" align="middle">
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
## Overview function
|
||||
|
||||
* <img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/config_menue_overview.jpg" width="600" align="middle">
|
||||
|
||||
1. Direct edit `config.ini` in text editor
|
||||
2. Configuration of image alignment
|
||||
a. Create of reference image
|
||||
b. Define alignment structures
|
||||
3. Definition of ROIs for digits and analog pointers
|
||||
4. Test the settings
|
||||
5. Back to main menue ("index.html")
|
||||
|
||||
|
||||
|
||||
### 1. Edit Config.ini
|
||||
|
||||
This is a text editor for the config.ini. Changes commited with the button on the lower left.
|
||||
|
||||
* <img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/config_s2_edit_config.jpg" width="600" align="middle">
|
||||
|
||||
|
||||
|
||||
### 2a. Create Reference Image
|
||||
|
||||
The reference image is the basis for the coordination of the ROIs. Therefore it is very important, to have a well aligned image, that is not rotated.
|
||||
|
||||
**Attention:** Updating the reference image, also means, that all alignment images and ROIs needs to be teached again. Therefore do this step only with caution.
|
||||
|
||||
* <img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/config_s3_reference.jpg" width="400" align="middle">
|
||||
|
||||
At first the current image is shown and no adjustment is possible. To reload the actual image push the button "Show actual Reference" (1). To define a new reference image push the button "Create new Reference" (2).
|
||||
Then the last taken raw image from the camera is loaded. If you want to update this, you can push the button "Make new raw image (raw.jpg)". If you need to mirror your image (e.g. mirror before camera) you can do this by selecting "mirror image". After loading the mirroring (in case checked) and the prerotation angle from the `config.ini` are applied. Then use the rough and fine adjustment to get the image straight aligned (3).
|
||||
If everything is done, you can save the result with "Update Reference Image" (4).
|
||||
|
||||
If you have problems with reflections, you can turn the camera in a positions, where the reflection is at a position, where no important information is. To reduce the intensity of the reflection you can also a peace of felt ("Filz") as diffusior at the LED.
|
||||
|
||||
|
||||
|
||||
### 2b. Define Alignment References
|
||||
|
||||
The alignment references are used to realign every taken image to the reference coordinates. Therefore two alignment structures are identified and the image is shifted and rotated according to their position with the target to be in exactly the same position as the reference image.
|
||||
|
||||
* <img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/config_s4_alignment.jpg" width="400" align="middle">
|
||||
|
||||
The alignment structures needs to be unique and have a good contrast. As this is the most calculation intensive process, only a field of view of 40x40 pixels around the original coordinates are scanned. This can be adjusted manually in the `config.ini`(Parameter: `SearchFieldX` / `SearchFieldY`).
|
||||
|
||||
In the upper part of the settings you can control the position and size of the selected reference image. You can define the ROI in the image directly via drag and drop with the mouse. Go to the starting point, push the left mouse button and drag your ROI. You will a red rectangle with the newly selected position. To make this active, you need to push "Update Reference" (2).
|
||||
You can change between the two reference images with the drop down box ("Reference 0", "Reference 1").
|
||||
|
||||
In some cases it might be useful to use a reference with a higher contrast. This can be achieved by pushing "Enhance Contrast" (3). The result will be calculated on the ESP32 - so be a bit patient, before you see it active.
|
||||
|
||||
To save the modified reference to the `config.ini`push finally "Save to config.ini".
|
||||
|
||||
|
||||
|
||||
### 3a./3b. Define ROIs for image recognition
|
||||
|
||||
Here the regions of interest for the digital and analog pointers are defined. As both are done identically, here as an example the digital images are shown.
|
||||
|
||||
|
||||
|
||||
* <img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/config_s5_ROIs.jpg" width="400" align="middle">
|
||||
|
||||
|
||||
|
||||
First of all, you can define more than one number sequence, for example for a dual meter. This is done by defining a "Number" (1). Analog and digital ROIs belonging to the same "Number" are considered part of the same counter.
|
||||
|
||||
As with the reference images, you can change the position, size and name of a ROI in the text fields or define it via drag and drop with the mouse. You can iterate through the defined ROIs with the drop-down box in the upper left area (2). To create new ROIs or delete existing ones, use the corresponding buttons. **Be careful:** if you delete all ROIs, the tool will ask you to define at least one manually in the `config.ini`.
|
||||
The order of the ROIs corresponds to the position of the digit / analog pointer in the final readout. The order can be changed with the buttons "move Next" / "move Previous" (3).
|
||||
|
||||
In order to have a good recognition, the active ROI has two rectangles for alignment:
|
||||
|
||||
<img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/config_s5_ROIs_details.jpg" align="middle">
|
||||
|
||||
* The outer rectangle is the final size of the ROI
|
||||
* More important is the inner, smaller rectangle. It should fit tightly around the number itself in the x- and y-dimension. You may need to unlock the aspect ratio to change the x- and y-size independently.
|
||||
* The line in the middle should go through the middle of the number (provided the digit is not currently rolling over)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
To save the result push "Save all to config.ini" (4).
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
**Attention:** Currently you have to reboot the ESP32 for changes in the `config.ini` to take effect.
|
||||
|
||||
These steps run directly on the ESP32, so be patient while waiting for the results.
|
||||
docs/Hardware-Compatibility.md
|
||||
# Hardware Compatibility
|
||||
|
||||
See also https://github.com/jomjol/AI-on-the-edge-device/discussions/1732
|
||||
|
||||
General remark: similar-looking boards can have major differences:

- Processor
- RAM (size & type) -> this project needs at least 4 MB of PSRAM!
- Flash ROM
- Camera module
- Onboard/external antenna
- Quality of the components
- Manufacturing quality of the PCB and soldering
- Different components
- "Clone" components -> ESPxx
- etc.

This can result in different power consumption, power requirements, compatibility issues, etc.
|
||||
|
||||
Most manufacturers and sellers buy whatever is cheap on the Asian markets that day. In the end it sometimes comes down to trial and error to find an ESP32-CAM module that works reliably.
|
||||
|
||||
Below you find some remarks and experiences from the community:
|
||||
|
||||
# ESP32 core itself
|
||||
|
||||
| Chip Version | Image | Status |
|
||||
| ------------------------- | ----- | -------- |
|
||||
| ESP32-D0WDQ6 (revision 1) | | **okay** |
|
||||
|
||||
# PSRAM
|
||||
|
||||
| Labeling on PSRAM module | Image | Status |
|
||||
| ---------------------------------------------- | ----- | ------------------------- |
|
||||
| IPUS<br/>IPS640LS0<br/>1815XBGN | | **okay** |
|
||||
| AP MEMORY<br/>6404L-3SOR<br/>1040H<br/>110089G | | **okay** |
|
||||
| AP MEMORY<br/>6404L-3SQR<br/>12205<br/>150047G | | **okay**<br />8MB |
|
||||
| AP MEMORY<br/>6404L-350R<br/>1120A<br/>130027G | | **NOT OK**<br />PSRAM not accessible|
|
||||
| AP MEMORY<br/>6404L-35QR<br/>11208<br/>130025G | | **NOT OK**<br />PSRAM not accessible|
|
||||
| AP MEMORY<br/>6404L-3SQR<br/>13100<br/>180026G| | **NOT OK**<br />PSRAM not accessible|
|
||||
| AP MEMORY<br/>6404L-3SQR<br/>11207<br/>130024G| | **NOT OK**<br />PSRAM not accessible|
|
||||
| AP MEMORY<br/>1604M-3SQR<br/>0280A<br/>070036G| | **NOT OK**<br />2MB only! |
|
||||
| ESP PSRAM64H 462021<br/>1B00286 | | **okay** |
|
||||
| ESP PSRAM16M 302020<br/> | | **NOT OK**<br />2MB only! |
|
||||
| ESP PSRAM16H 202020<br/>050022G | | **NOT OK**<br />2MB only! |
|
||||
|
||||
# OV2640 - Camera
|
||||
|
||||
The experience with the cameras is based on single modules only. It is well possible that a tested module was simply damaged and other modules of the same type will work. Give it a try and report back!
|
||||
|
||||
| Labeling on Flex-Connector | Image | Status |
|
||||
| -------------------------- | ----- | --------------------------------- |
|
||||
| TY-OV2<br/>640-V2.0 | | **okay** |
|
||||
| DCX-OV2<br/>640-V2 | | **okay** |
|
||||
| DC-26<br/>40-V3 | | **okay**: 3x<br/>**NOT OKAY:** 1x |
|
||||
|
||||
|
||||
|
||||
# ESP32 Modules
|
||||
|
||||
| Module | Image | Status |
|
||||
| ------------------------------------------------------------ | ----- | ------------------------------ |
|
||||
| ESP32CAM<br/>Different versions on the market! Especially the PSRAM is sometimes labeled wrong (Label: 4MB, Real: only 2 MB --> will not work!) | | **okay**<br />with >=4 MB PSRAM! |
|
||||
| ESP32-S3-EYE<br />No flash LED, pins used differently (e.g. LCD display) | | **NOT OKAY** |
|
||||
|
||||
|
||||
|
||||
# SD-Cards
|
||||
|
||||
Due to the limited number of freely available GPIOs (camera, SD card, flash LED, ... all need pins), the SD card is connected in 1-wire mode. Some cards are not compatible with the ESP32-CAM module for unknown reasons.
|
||||
It has been observed that smaller cards (up to 4 GB) tend to be more stable and larger cards cause more problems. But there are quite a few exceptions in the forums (4 GB cards not working, 16 GB cards working like a charm).
|
||||
|
||||
|
||||
# Devices known to work
|
||||
|
||||
Please add links to stores of which you know they work:
|
||||
- https://arduino-projekte.info/produkt/esp32-cam-v2-integriertem-ch340-mit-ov2640-kamera-modul/ ? See https://github.com/jomjol/AI-on-the-edge-device/discussions/1041
|
||||
- https://www.amazon.de/-/en/gp/product/B0B51CQ13R
|
||||
- https://www.reichelt.de/entwicklerboards-esp32-kamera-2mp-25--debo-cam-esp32-p266036.html?PROVID=2788&gclid=CjwKCAiAqaWdBhAvEiwAGAQlttJnV4azXWDYeaFUuNioMICh-jvxKp6Cifmcep9vvtoT2JRCDqBczRoC7Q0QAvD_BwE (27.12.2022)
|
||||
- ...
|
||||
- Sandisk 2GB Micro SD Class 2 [Sandisk 2GB](https://www.amazon.co.uk/gp/product/B000N3LL02/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&psc=1)
|
||||
- AITRIP ESP32 and CAM [ESP-32/CAM](https://www.amazon.co.uk/gp/product/B08X49P8P3/ref=ppx_yo_dt_b_asin_title_o03_s00?ie=UTF8&psc=1)
|
||||
- [Amazon US - Aideepen ESP32-CAM W BT Board ESP32-CAM-MB Micro USB to Serial Port CH-340G with OV2640 2MP Camera Module Dual Mode](https://www.amazon.com/gp/product/B0948ZFTQZ) with [Amazon US - Cloudisk 5Pack 4GB Micro SD Card 4 GB MicroSD Memory Card Class6](https://www.amazon.com/gp/product/B07QYTP4VN)
|
||||
|
||||
# Weak Wifi
|
||||
The ESP32-CAM supports an external antenna. It requires some soldering skills but can improve the connection quality. See https://randomnerdtutorials.com/esp32-cam-connect-external-antenna/
|
||||
docs/Home.md
|
||||
# Welcome to the AI-on-the-edge-device!
|
||||
|
||||
Artificial-intelligence-based systems have become established in our everyday life. Just think of speech or image recognition. Most of these systems rely on either powerful processors or a direct connection to the cloud for doing the calculations there. With the increasing power of modern processors, AI systems are coming closer to the end user - which is usually called **edge computing**.
|
||||
Here this edge computing is brought into a practice-oriented example, where an AI network is implemented on an ESP32 device - hence: **AI on the edge**.
|
||||
|
||||
## Key features
|
||||
- Tensorflow Lite (TFlite) integration - including easy to use wrapper
|
||||
- Inline Image processing (feature detection, alignment, ROI extraction)
|
||||
- **Small** and **cheap** device (3x4.5x2 cm³, < 10 EUR)
|
||||
- camera and illumination integrated
|
||||
- Web interface for administration and control
|
||||
- OTA-Interface to update directly through the web interface
|
||||
- API for easy integration
|
||||
|
||||
## Idea
|
||||
|
||||
<img src="https://raw.githubusercontent.com/jomjol/AI-on-the-edge-device/master/images/idea.jpg" width="600">
|
||||
|
||||
|
||||
### Hardware
|
||||
|
||||
<img src="https://raw.githubusercontent.com/jomjol/AI-on-the-edge-device/master/images/watermeter_all.jpg" width="200"><img src="https://raw.githubusercontent.com/jomjol/AI-on-the-edge-device/master/images/main.jpg" width="200"><img src="https://raw.githubusercontent.com/jomjol/AI-on-the-edge-device/master/images/size.png" width="200">
|
||||
|
||||
|
||||
|
||||
### Web interface
|
||||
|
||||
<img src="https://raw.githubusercontent.com/jomjol/AI-on-the-edge-device/master/images/watermeter.jpg" width="600">
|
||||
|
||||
### Configuration Interface
|
||||
|
||||
<img src="https://raw.githubusercontent.com/jomjol/AI-on-the-edge-device/master/images/edit_reference.jpg" width="600">
|
||||
|
||||
|
||||
|
||||
**Have fun in studying the new possibilities and ideas**
|
||||
|
||||
This project is about image recognition and digitization, done entirely on a cheap ESP32 board using artificial intelligence in the form of convolutional neural networks (CNNs). Everything, from image capture (OV2640) and image preprocessing (auto alignment, ROI identification) all the way down to the image recognition (CNN inference) and result plausibility checks, is done on a cheap 10 EUR device.
|
||||
|
||||
All this is integrated in an easy-to-set-up and easy-to-use environment that takes care of all background processing and handling, including a regular job scheduler. The user interface is an integrated web server that can be easily adjusted and offers the data via an API in different formats.
|
||||
|
||||
The task demonstrated here is the automated readout of an analog water meter. The water consumption is to be recorded within a home automation system, but the water meter is completely analog without any electronic interface. The task is therefore solved by regularly taking an image of the water meter and digitizing the reading.
|
||||
|
||||
There are two types of CNN implemented: a classification network for reading the digits and a continuous-output network for digitizing the analog pointers used for the sub-digit readings.
|
||||
|
||||
This project is an evolution of the [water-meter-system-complete](https://github.com/jomjol/water-meter-system-complete), which uses the ESP32-CAM just for taking the image and a 1 GB Docker image to run the neural network backend. Here everything is integrated in an ESP32-CAM module with 8MB of RAM and an SD card as data storage.
|
||||
|
||||
|
||||
## Functionality
|
||||
This system implements several functions:
|
||||
|
||||
* water meter readout
|
||||
* picture provider
|
||||
* file server
|
||||
* OTA functionality
|
||||
* graphical configuration manager
|
||||
* web server
|
||||
|
||||
The details can be found here: [[Integrated Functions]]
|
||||
|
||||
docs/Install-a-rolling-(unstable)-release.md
|
||||
# :bangbang: Living on the edge :bangbang:
|
||||
:bangbang: The branch [rolling](https://github.com/jomjol/AI-on-the-edge-device/tree/rolling) contains the latest version of the firmware and the web interface. It is work in progress; don't expect it to run stably or to be an improvement for your AI-on-the-edge-device! It might also break the OTA update and then require manual flashing over USB! :bangbang:
|
||||
|
||||
# Still here?
|
||||
|
||||
Grab the latest build from https://github.com/jomjol/AI-on-the-edge-device/actions and proceed as follows:
|
||||
1. Pick the most top successful (green) build.
|
||||
2. Download the `firmware__extract_before_upload__only_needed_for_migration_from_11.2.0` and extract it (it is a zip file).
|
||||
3. Flash that binary as new firmware.
|
||||
4. Download the `html__only_needed_for_migration_from_11.2.0__2022-09-15_19-13-37__rolling_(042ff18)`. It is also a zip file but you should **not** extract it!
|
||||
5. Flash the zip file as the html part.
|
||||
|
||||
The filenames have changed, e.g. right now it is:
|
||||
* AI-on-the-edge-device__manual-setup__rolling_(4b23e0c)
|
||||
* AI-on-the-edge-device__remote-setup__rolling_(4b23e0c)
|
||||
* AI-on-the-edge-device__update__rolling_(4b23e0c)
|
||||
|
||||
The GitHub bot reply for the rolling build currently contains the following info:
|
||||
|
||||
You can use the latest [Automatic Build](https://github.com/jomjol/AI-on-the-edge-device/actions/workflows/build.yaml?query=branch%3Arolling) of the rolling branch. It might already contain a fix for your issue.
|
||||
Pick the topmost passing entry (it has a green circle with a tick in it), then scroll down to the artifacts and download the file named update_*. (It is currently unclear what the manual-setup and remote-setup artifacts are used for.)
|
||||
docs/Installation.md
|
||||
# Installation
|
||||
|
||||
The installation requires multiple steps:
|
||||
1. Get the right hardware and wire it up
|
||||
1. Flash the firmware onto the ESP32
|
||||
1. Write the data to the SD-Card
|
||||
1. Insert the SD-Card into the ESP32 board
|
||||
1. Power/restart it.
|
||||
|
||||
## Hardware
|
||||
#### ESP32-CAM
|
||||
|
||||
* OV2640 camera module
|
||||
* SD-Card slot
|
||||
* 4 MB PSRAM.
|
||||
|
||||
It can be easily found on the typical internet stores, searching for ESP32-CAM for less than 10 EUR.
|
||||
|
||||
#### USB->UART interface
|
||||
|
||||
For flashing the firmware the first time, a USB -> UART converter is needed. Later firmware upgrades can then be flashed via OTA.
|
||||
|
||||
#### Power supply
|
||||
|
||||
For the power supply a 5 V source is needed; the easiest option is a USB power supply. It should deliver at least 500 mA. To buffer current peaks, some users reported using a large electrolytic capacitor, e.g. 2200 uF, between GND and VCC.
|
||||
|
||||
**Attention:** several internet forums report problems when the ESP32-CAM is supplied with only 3.3 V.
|
||||
|
||||
#### Housing
|
||||
|
||||
A small 3D-printable example for a very small case can be found in Thingiverse here: https://www.thingiverse.com/thing:4571627
|
||||
|
||||
<img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/main.jpg" width="300"><img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/size.png" width="300">
|
||||
|
||||
|
||||
|
||||
**Attention**: the focus of the OV2640 needs to be adjusted, as it is normally set from ~40 cm to infinity. To get a sufficiently large image, the focus distance needs to be changed to about 10 cm. To do so, the sealing glue on the lens ring has to be removed with a scalpel or sharp knife. Afterwards the lens can be rotated clockwise until the image is sharp again.
|
||||
<img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/focus_adjustment.jpg" width="200">
|
||||
|
||||
|
||||
|
||||
### Wiring
|
||||
|
||||
Besides the 5 V power supply, a connection to the USB-UART converter is only needed for the first flashing, including a short of GPIO0 to GND to start the bootloader.
|
||||
|
||||
An example for the wiring can be found here:
|
||||
|
||||
<img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/wiring.png" width="600">
|
||||

|
||||
|
||||
|
||||
It is also possible to use external LEDs for the illumination instead of the internal flash LED. This is described here: [[External-LED]]
|
||||
|
||||
|
||||
|
||||
|
||||
## Firmware flashing
|
||||
### Files
|
||||
Grab the firmware from the
|
||||
- [Releases page](https://github.com/jomjol/AI-on-the-edge-device/releases) (Stable, tested versions), or the
|
||||
- [Automatically build development branch](https://github.com/jomjol/AI-on-the-edge-device/actions?query=branch%3Arolling) (experimental, untested versions). Please have a look on https://github.com/jomjol/AI-on-the-edge-device/wiki/Install-a-rolling-%28unstable%29-release first!
|
||||
|
||||
You need:
|
||||
* partitions.bin
|
||||
* bootloader.bin
|
||||
* firmware.bin
|
||||
* html.zip
|
||||
|
||||
|
||||
### Flashing
|
||||
There are several options to flash the firmware. Here three are described:
|
||||
|
||||
#### 1. Web Installer
|
||||
There is a Web Installer available which works right out of the web browsers Edge and Chrome.
|
||||
You can access it with the following link: https://jomjol.github.io/AI-on-the-edge-device
|
||||
|
||||
This is the preferred way for beginners as it also allows access to the USB Log:
|
||||
|
||||
[<img src=https://user-images.githubusercontent.com/1783586/200926652-293e9a1c-86ec-4b79-9cef-3e6f3c47ea4b.png height=200px>](https://user-images.githubusercontent.com/1783586/200926652-293e9a1c-86ec-4b79-9cef-3e6f3c47ea4b.png)
|
||||
|
||||
|
||||
#### 2. Using the Flash Tool from Espressif
|
||||
|
||||
The flashing of the firmware can be done with the "Flash Download Tool" from Espressif, which can be found [here](https://www.espressif.com/en/support/download/other-tools).
|
||||
|
||||
Download and extract the Flash Tool. After starting it, choose "Developer Mode", then "ESP32-DownloadTool", and you are in the setup of the flashing tool. Connect the ESP32-CAM via the USB-UART connection and identify the COM port.
|
||||
|
||||
:bangbang: **Attention** :bangbang: if you are reflashing the device, it is strongly recommended to erase the flash memory before flashing the firmware - especially if you used OTA in between. Leftover data on the flash might otherwise cause the device to still boot from an old image in the OTA area, which is not erased by a normal flash.
|
||||
|
||||
Put your ESP32 in bootloader mode and push start; the tool will then identify the board and you can configure the bin files according to the following table:
|
||||
|
||||
| Filename | Offset |
|
||||
|----------------|--------:|
|
||||
| bootloader.bin | 0x1000 |
|
||||
| partitions.bin | 0x8000 |
|
||||
| firmware.bin | 0x10000 |
|
||||
|
||||
<img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/Flash_Settings.png" width="400">
|
||||
|
||||
Alternatively it can be directly flashed from the development environment - here PlatformIO. But this is rather for experienced users, as the whole development chain needs to be installed for compilation.
|
||||
|
||||
|
||||
#### 3. Using esptool in python directly
|
||||
|
||||
For this you need a Python environment (e.g. Anaconda on Windows 10).
There you need to install esptool:
|
||||
|
||||
```
|
||||
pip install esptool
|
||||
```
|
||||
Then connect the ESP32 to the system via the USB-UART converter, put it into boot mode, and with the following commands you can erase the flash and then flash bootloader, partitions and firmware in two steps:
|
||||
|
||||
```
|
||||
esptool erase_flash
|
||||
esptool write_flash 0x01000 bootloader.bin 0x08000 partitions.bin 0x10000 firmware.bin
|
||||
```
|
||||
- You may need to specify the COM port explicitly if it is not detected by default (see the example below).
|
||||
- If the erase command throws the error `A fatal error occurred: ESP32 ROM does not support function erase_flash.`, your `esptool` might be too old, see https://techoverflow.net/2022/02/08/how-to-fix-esp32-a-fatal-error-occurred-esp32-rom-does-not-support-function-erase_flash/
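A sketch of the same two calls with an explicitly given port and baud rate (the port name `COM3` is only an example; on Linux it is typically `/dev/ttyUSB0`):

```
esptool --port COM3 --baud 460800 erase_flash
esptool --port COM3 --baud 460800 write_flash 0x1000 bootloader.bin 0x8000 partitions.bin 0x10000 firmware.bin
```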
|
||||
|
||||
With some Python installations this may not work and you will receive an error; in that case try `python -m pip install esptool` or `pip3 install esptool`.
|
||||
|
||||
Further recommendations can be found on the [espressif webpage](https://docs.espressif.com/projects/esptool/en/latest/esp32/installation.html)
|
||||
|
||||
## SD-Card
|
||||
The firmware expects an SD card with a certain directory and file structure in order to work properly.
|
||||
For the first setup take the `initial_esp32_setup_*.zip` from the [Release](https://github.com/jomjol/AI-on-the-edge-device/releases) page and extract the content of the contained `sd-card.zip` onto your SD-Card.
|
||||
|
||||
This only has to be done once; later updates of the SD card content are possible via the OTA update.
|
||||
|
||||
### :bangbang: Attention :bangbang:
|
||||
|
||||
- Due to the limited availability of GPIOs (OV2640, flash LED, PSRAM & SD card), the communication with the SD card is limited to 1-line SD mode. It has turned out that this causes problems with very large SD cards (64 GB, sometimes 32 GB) and some no-name low-cost SD cards.
|
||||
- There must be no partition table on the SD-card (no GPT, but only MBR for the single partition)
|
||||
- The following settings are necessary for formatting the SD card: **SINGLE PARTITION, MBR, FAT32 - 32K. NOT exFAT**
|
||||
- Some ESP32 devices share their SD-card and/or camera GPIOs with the pins for TX and RX. If you see errors like “Failed to connect” then your chip is probably not entering the bootloader properly. Remove the respective modules temporarily to free the GPIOs for flashing. You may find more information about troubleshooting on the [homepage of Espressif](https://docs.espressif.com/projects/esptool/en/latest/esp8266/troubleshooting.html).
|
||||
|
||||
**The ESP32 indicates problems with the SD card during startup with a fast not ending blinking.**
|
||||
**In this case, please try another SD card.**
|
||||
|
||||
## WLAN
|
||||
|
||||
Access to the WLAN is configured in the `wlan.ini` directly in the root directory of the SD card. Just enter the corresponding SSID and password before starting the ESP32; a minimal sketch is shown below. This file is hidden from external access (e.g. via the file manager) to protect the password.
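A minimal sketch of such a `wlan.ini` (the key names below are illustrative - use the sample `wlan.ini` contained in the SD card package as the actual template and only replace the values):

```
ssid = "MyHomeWifi"
password = "MySecretPassword"
```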
|
||||
|
||||
After power on the connection status is indicated by 3x blinking of the red on board LED.
|
||||
|
||||
WLAN-Status indication:
|
||||
|
||||
* **5 x** fast blinking (< 1 second): connection still pending
|
||||
* **3 x** slow blinking (1 second on/off): WLAN connection established
|
||||
|
||||
It is normal that at first one or two times a pending connection is indicated.
|
||||
docs/Integrated Functions.md
|
||||
## wasserzaehler
|
||||
|
||||
```http://IP-ESP32/wasserzaehler.html```
|
||||
|
||||
This is the main purpose of this device. It returns the converted image as a number, with different output options. The output can be modified either by the configuration parameters or by URL parameters.
|
||||
|
||||
Details can be found here: tbd
|
||||
|
||||
|
||||
|
||||
## Picture Server
|
||||
|
||||
```http://IP-ESP32/capture```
|
||||
|
||||
```http://IP-ESP32/capture_with_flashlight```
|
||||
|
||||
This is an implementation of the camera interface of https://github.com/jomjol/water-meter-picture-provider
|
||||
|
||||
It is fully compatible, including the parameters (```quality=...```, ```size=...```). This allows this ESP32 system to be used in parallel to the corresponding Docker system https://github.com/jomjol/water-meter-system-complete, of which this project is basically the successor.
|
||||
|
||||
|
||||
|
||||
## File server
|
||||
|
||||
Access: ```http://IP-ESP32/fileserver/```
|
||||
|
||||
A simple file server that allows viewing, uploading, downloading and deleting individual files of the SD card content.
|
||||
|
||||
The usage is self-explanatory. A file or file path can be accessed directly by appending it to the file server URL.
|
||||
|
||||
Example for ```config.ini``` : ```http://IP-ESP/fileserver/config/config.ini```
|
||||
|
||||
|
||||
|
||||
## OTA-Update
|
||||
|
||||
```http://IP-ESP32/ota?file=firmware.bin```
|
||||
|
||||
Here an over-the-air update can be triggered. The firmware file is expected to be located in the subdirectory ```/firmware/``` and can be uploaded with the file server. The name of the firmware file has to be given with the parameter ```file```.
|
||||
|
||||
|
||||
|
||||
## Reboot
|
||||
|
||||
```http://IP-ESP32/reboot```
|
||||
|
||||
A reboot with a delay of 5 seconds is initiated, e.g. after firmware update.
|
||||
|
||||
**ATTENTION**: currently this is not working properly - hardware power off is needed instead. **Work in progress!**
|
||||
|
||||
|
||||
|
||||
## Simple Web Server
|
||||
|
||||
If none of the above URLs match, a very simple web server checks whether there is a matching file in the subdirectory ```/html```.
This can be used to serve information or simple static web pages.
|
||||
docs/Integration-Home-Assistant.md
|
||||
# Integration into Home Assistant
|
||||
There are 3 ways to get the data into your Home Assistant:
|
||||
1. Using MQTT (Automatically Setup Entities using Homeassistant MQTT Discovery)
|
||||
1. Using MQTT (Manually Setup Entities)
|
||||
2. Using REST calls
|
||||
|
||||
The first one is the easier way if you already have MQTT in use.
|
||||
|
||||
## Using MQTT (Automatically Setup Entities using Homeassistant MQTT Discovery)
|
||||
|
||||
:bangbang: This feature will be available with the next release!
|
||||
|
||||
Starting with Version `>12.0.1`, AI-on-the-edge-devices support Homeassistant Discovery.
|
||||
1. Check [here](https://www.home-assistant.io/docs/mqtt/discovery/) to learn more about it and how to enable it in Homeassistant.
|
||||
1. You also have to enable it in the MQTT settings of your device:
|
||||
|
||||

|
||||
|
||||
Make sure to select the right Meter Type to get the right units!
|
||||
|
||||
On the next start of the device, it will send discovery topics and Homeassistant should pick them up and show them under `Settings > Integrations > MQTT`:
|
||||
|
||||

|
||||

|
||||

|
||||
|
||||
|
||||
### Using MQTT (Manually Setup Entities)
|
||||
First make sure with an MQTT client (for example [MQTT Explorer](http://mqtt-explorer.com/)) that MQTT works as expected and to get a list of the available topics!
|
||||
|
||||
Then add a sensor for each property:
|
||||
```yaml
|
||||
mqtt:
|
||||
sensor:
|
||||
- state_topic: "wasserzaehler/main/value"
|
||||
name: "Watermeter Value"
|
||||
unique_id: watermeter_value
|
||||
unit_of_measurement: 'm³'
|
||||
state_class: total_increasing
|
||||
device_class: water # Needs Homeassistant 2022.11!
|
||||
icon: 'mdi:water-pump'
|
||||
availability_topic: wasserzaehler/connection
|
||||
payload_available: connected
|
||||
payload_not_available: connection lost
|
||||
|
||||
- state_topic: "wasserzaehler/main/rate"
|
||||
name: "Watermeter Rate"
|
||||
unique_id: watermeter_rate
|
||||
unit_of_measurement: 'm³/min'
|
||||
state_class: measurement
|
||||
device_class: water # Needs Homeassistant 2022.11!
|
||||
icon: 'mdi:water-pump'
|
||||
availability_topic: wasserzaehler/connection
|
||||
payload_available: connected
|
||||
payload_not_available: connection lost
|
||||
|
||||
- state_topic: "wasserzaehler/main/error"
|
||||
name: "Watermeter Error"
|
||||
unique_id: watermeter_error
|
||||
icon: "mdi:water-alert"
|
||||
availability_topic: wasserzaehler/connection
|
||||
payload_available: connected
|
||||
payload_not_available: connection lost
|
||||
|
||||
- state_topic: "wasserzaehler/uptime"
|
||||
name: "Watermeter Uptime"
|
||||
unique_id: watermeter_uptime
|
||||
unit_of_measurement: 's'
|
||||
state_class: measurement
|
||||
device_class: duration
|
||||
entity_category: diagnostic
|
||||
icon: "mdi:timer-outline"
|
||||
availability_topic: wasserzaehler/connection
|
||||
payload_available: connected
|
||||
payload_not_available: connection lost
|
||||
```
|
||||
If you run the discovery once, you can also extract the information from there (MQTT Info, untested):
|
||||
```yaml
|
||||
mqtt: # Extracted form the Discovery but untested!
|
||||
sensor:
|
||||
- name: Value
|
||||
unique_id: wasserzaehler-main_value
|
||||
icon: mdi:gauge
|
||||
state_topic: wasserzaehler/main/value
|
||||
unit_of_measurement: m³
|
||||
device_class: water
|
||||
state_class: total_increasing
|
||||
availability_topic: wasserzaehler/connection
|
||||
payload_available: connected
|
||||
payload_not_available: connection lost
|
||||
```
|
||||
|
||||
If you want to convert the `m³` to `l`, use a template sensor:
|
||||
```yaml
|
||||
template:
|
||||
- sensor:
|
||||
- name: "Watermeter in l"
|
||||
unique_id: watermeter_in_l
|
||||
icon: "mdi:gauge"
|
||||
state: "{{ states('sensor.watermeter_value')|float(default=0) * 1000 }}" # Convert 1 m3 => 1000 l
|
||||
unit_of_measurement: l
|
||||
availability: "{{ states('sensor.watermeter_value') not in ['unknown', 'unavailable', 'none'] }}"
|
||||
```
|
||||
|
||||
If you want to have the consumption per day, you can use a [Utility Meter](https://www.home-assistant.io/integrations/utility_meter/).
It is a helper and can be used to reset the total-increasing values once a day:
|
||||
|
||||
```yaml
|
||||
utility_meter:
|
||||
utility_meter_gas_per_day:
|
||||
source: sensor.gasmeter_value
|
||||
cycle: daily
|
||||
|
||||
utility_meter_water_per_day:
|
||||
source: sensor.watermeter_value
|
||||
cycle: daily
|
||||
```
|
||||
|
||||
Note that you also can add it using the UI.
|
||||
|
||||
### Examples
|
||||

|
||||
|
||||

|
||||
|
||||
### Statistics Graph
|
||||
Creating Statistics Graphs (eg. usage per day) is easy using the [Energy Dashboard](https://www.home-assistant.io/home-energy-management/):
|
||||

|
||||
|
||||
Note that there seems to be a bug in the graph, see https://github.com/home-assistant/frontend/issues/13995!
|
||||
|
||||
|
||||
### InfluxDb Graphs
|
||||
If you have setup InfluxDB already, it is also possible to fetch statistics from there, eg. daily usage:
|
||||
```
|
||||
from(bucket: "HomeAssistant")
|
||||
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|
||||
|> filter(fn: (r) => r["entity_id"] == "wasserverbrauch_tag")
|
||||
|> filter(fn: (r) => r["_field"] == "value")
|
||||
|> timeShift(duration: -1d)
|
||||
|> aggregateWindow(every: 1d, fn: max, createEmpty: false)
|
||||
|> yield(name: "mean")
|
||||
```
|
||||
|
||||

|
||||
|
||||
|
||||
## Using REST
|
||||
When using REST, Home Assistant has to periodically call an URL on the ESP32 which in return provides the requested data.
|
||||
|
||||
See [REST API](https://github.com/jomjol/AI-on-the-edge-device/wiki/REST-API) for a list of available URLs.
|
||||
|
||||
The most practical one is the `json` entrypoint, which provides the most relevant data in JSON format:
|
||||
`http://<IP>/json`
|
||||
This would return:
|
||||
```JSON
|
||||
{
|
||||
"main":
|
||||
{
|
||||
"value": "512.3020",
|
||||
"raw": "0512.3020",
|
||||
"error": "no error",
|
||||
"rate": 0.000000,
|
||||
"timestamp": "2022-10-02T20:32:06"
|
||||
[..]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
To do such a REST call, you need to create a REST sensor:
|
||||
```yaml
|
||||
sensor:
|
||||
- platform: rest
|
||||
name: "Gasmeter JSON"
|
||||
resource: http://<IP>/json
|
||||
json_attributes:
|
||||
- main
|
||||
value_template: '{{ value_json.value }}'
|
||||
headers:
|
||||
Content-Type: application/json
|
||||
scan_interval: 60
|
||||
|
||||
template:
|
||||
sensor:
|
||||
- name: "Gasmeter Value from JSON"
|
||||
unique_id: gas_meter_value_from_json
|
||||
state: "{{ state_attr('sensor.gasmeter_json','main')['value'] }}"
|
||||
unit_of_measurement: 'm³'
|
||||
|
||||
- name: "Watermeter Value from JSON"
|
||||
unique_id: water_meter_value_from_json
|
||||
state: >-
|
||||
{{ state_attr('sensor.watermeter_json','main')['value'] | float }}
|
||||
unit_of_measurement: 'm³'
|
||||
device_class: water
|
||||
state_class: total_increasing
|
||||
icon: mdi:gauge
|
||||
|
||||
```
|
||||
See also https://community.home-assistant.io/t/rest-sensor-nested-json/243420/9
|
||||
|
||||
|
||||
#### Photo
|
||||
REST can also be used to show the photo of the last round:
|
||||
|
||||

|
||||
|
||||
To access it, use `http://<IP>/img_tmp/alg_roi.jpg` resp `http://<IP>/img_tmp/raw.jpg`.
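One way to embed this image in Home Assistant is a generic camera entity (a sketch; replace `<IP>` with your device's address):

```yaml
camera:
  - platform: generic
    name: Watermeter last reading
    still_image_url: http://<IP>/img_tmp/alg_roi.jpg
```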
|
||||
docs/Learn-models-with-your-own-images.md
|
||||
If your device has new, different digits and the existing models don't recognize them well, you can collect your own images and train the model.
|
||||
|
||||
But before you do this, please check if your type really is not contained yet in the training data, see [digits](https://jomjol.github.io/neural-network-digital-counter-readout) resp. [pointers](https://jomjol.github.io/neural-network-analog-needle-readout/) for an overview of images used for the training
|
||||
|
||||
The neural network is trained on the basis of a set of images that have been collected over time. If your digits are included, or at least very similar to included images, the chance is very high that the neural network will work fine for you as well.
|
||||
|
||||
The neural network configuration is stored in the TensorFlow Lite format as `filename.tfl` or `filename.tflite` in the `/config` directory. It can be updated by uploading the new file and activating it on the configuration page or in the config file `/config/config.ini`.
|
||||
|
||||
In order to incorporate new digits, a training set of images is required. The training images need to be collected in the final setup with the help of the `Digits` or `Analog` log settings (not to be confused with the `Data` or `Debug` log). Enable the logging of the images on the configuration page or in the config file `/config/config.ini`:
|
||||
|
||||

|
||||
|
||||
Now wait until you have an image of each digit of every type on the SD card. Then remove the SD card from the camera and pick two to three images of each digit (**not more! :-)**). The format can be jpg.
|
||||
|
||||
|
||||
## Collecting images for dig-class100/dig-cont/ana-class100
|
||||
|
||||
[Collectmeterdigits](https://github.com/haverland/collectmeterdigits) and [collectmeteranalog](https://github.com/haverland/collectmeteranalog) helps you to collect the images easily. Read the project readme for detailed instructions.
|
||||
|
||||
## Train the model
|
||||
|
||||
For training the model you will need a python and Jupyter installation.
|
||||
|
||||
All currently labeled images can be found under [ziffer_sortiert_raw](https://github.com/jomjol/neural-network-digital-counter-readout/tree/master/ziffer_sortiert_raw).
|
||||
|
||||
### dig-class11 models (digits)
|
||||
|
||||
Fork and checkout [neural-network-digital-counter-readout](https://github.com/jomjol/neural-network-digital-counter-readout).
|
||||
|
||||
Install all requirements for running the notebooks.
|
||||
|
||||
```shell
|
||||
pip install -r requirements.txt
|
||||
```
|
||||
|
||||
Put your labeled images into `/ziffer_sortiert_raw` folder and run
|
||||
|
||||
1. [Image_Preparation.ipynb](https://github.com/jomjol/neural-network-digital-counter-readout/blob/master/Image_Preparation.ipynb)
|
||||
2. [Train_CNN_Digital-Readout-Small-v2.ipynb](https://github.com/jomjol/neural-network-digital-counter-readout/blob/master/Train_CNN_Digital-Readout-Small-v2.ipynb)
|
||||
|
||||
This creates a dig-class11_xxxx_s2.tflite model, which you can upload to the `config` folder on your device and test.
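One way to run the notebooks listed above locally is a plain Jupyter installation (a sketch, assuming Python and pip are already available):

```shell
pip install -r requirements.txt
pip install notebook
jupyter notebook Image_Preparation.ipynb
```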
|
||||
|
||||
|
||||
### dig-class100 / dig-cont models (digits)
|
||||
|
||||
Fork and checkout [Tenth-of-step-of-a-meter-digit](https://github.com/haverland/Tenth-of-step-of-a-meter-digit).
|
||||
|
||||
All labeled images can be found under [Images](https://github.com/haverland/Tenth-of-step-of-a-meter-digit/tree/master/images).
|
||||
|
||||
Install all requirements for running the notebooks.
|
||||
|
||||
```shell
|
||||
pip install -r requirements.txt
|
||||
```
|
||||
|
||||
Put your labeled images into `images/collected/<typeofdevice>/<your_short>/`
|
||||
|
||||
Run [dig-class100-s2.ipynb](https://github.com/haverland/Tenth-of-step-of-a-meter-digit/blob/master/dig-class100-s2.ipynb). The model to upload to your device can be found under `/output`.
|
||||
|
||||
|
||||
|
||||
### ana-class100/ana-cont models (analog pointers)
|
||||
|
||||
Fork and checkout [neural-network-analog-needle-readout](https://github.com/jomjol/neural-network-analog-needle-readout).
|
||||
|
||||
All labeled images can be found under [data_raw_all](https://github.com/jomjol/neural-network-analog-needle-readout/tree/main/data_raw_all).
|
||||
|
||||
Install all requirements for running the notebooks.
|
||||
|
||||
```shell
|
||||
pip install -r requirements.txt
|
||||
```
|
||||
|
||||
Put your labeled images into `images/collected/<typeofdevice>/<your_short>/`
|
||||
|
||||
After adding new images you need to run [Image_Preparation.ipynb](https://github.com/jomjol/neural-network-analog-needle-readout/blob/main/Image_Preparation.ipynb) before you train the models.
|
||||
|
||||
Run [Train_CNN_Analog-Readout_100-Small1_Dropout.ipynb](https://github.com/jomjol/neural-network-analog-needle-readout/blob/main/Train_CNN_Analog-Readout_100-Small1_Dropout.ipynb) and/or [Train_CNN_Analog-Readout_Version-Small2.ipynb](https://github.com/jomjol/neural-network-analog-needle-readout/blob/main/Train_CNN_Analog-Readout_Version-Small2.ipynb). The model to upload to your device can be found in the project folder.
|
||||
|
||||
|
||||
## Share your images
|
||||
|
||||
If the results are good, you can share the images as a pull request. Please share the images only!
|
||||
|
||||
If you are not able to create a pull request or don't know what it is, open an [issue](https://github.com/jomjol/AI-on-the-edge-device/issues) and attach the zipped images.
|
||||
|
||||
### Images can be rejected if
|
||||
|
||||
* Too many images of the same device are provided - as with dig-class11, more than 1000 images of your device are really too much.
* The ROIs are not configured well - badly cropped images reduce the accuracy of the networks.
* The images are out of focus.
* The images are too blurry.
|
||||
|
||||
Our models are too small to recognize everything in any quality, so we only use images of medium or good quality.
|
||||
docs/MQTT-API.md
|
||||
# General Information
|
||||
The device can connect to an MQTT broker to publish data and subscribe to specific topics.
|
||||
|
||||
The MQTT service has to be enabled and configured properly in the device configuration via web interface (`Settings` -> `Configuration` -> section `MQTT`)
|
||||
|
||||
The following parameters have to be defined:
|
||||
* URI
|
||||
* MainTopic (optional, if not set, the hostname is used)
|
||||
* ClientID (optional, if not set, `AIOTED-` + the MAC address gets used to make sure the ID is unique)
|
||||
* User (optional)
|
||||
* Password (optional)
|
||||
* RetainFlag (optional)
|
||||
|
||||
# Published topics
|
||||
|
||||
## Status
|
||||
`MainTopic`/{status topic}, e.g. `watermeter/status`
|
||||
* ### Connection
|
||||
|
||||
* ### Interval
|
||||
|
||||
* ### MAC
|
||||
|
||||
* ### IP
|
||||
|
||||
* ### Hostname
|
||||
|
||||
* ### Uptime
|
||||
|
||||
* ### FreeMem
|
||||
|
||||
* ### WifiRSSI
|
||||
|
||||
* ### CPUTemp
|
||||
|
||||
* ### Status
|
||||
|
||||
## Result
|
||||
`MainTopic`/{NumberName}/{result topic}, e.g. `watermeter/main/value`
|
||||
|
||||
* ### Value
|
||||
|
||||
* ### Raw
|
||||
|
||||
* ### Error
|
||||
|
||||
* ### JSON
|
||||
|
||||
* ### Rate
|
||||
|
||||
* ### Rate_per_time_unit
|
||||
The time Unit gets set with the Homeassistant Discovery, eg. `h` or `m` (minutes)
|
||||
|
||||
* ### Rate_per_digitalization_round
|
||||
The `interval` defines when the next round gets triggered
|
||||
|
||||
* ### Changeabsolut
|
||||
|
||||
* ### Timestamp
|
||||
|
||||
* ### JSON
|
||||
All relevant results in JSON syntax
|
||||
|
||||
## GPIO
|
||||
`MainTopic`/{GPIO topic}, e.g. `watermeter/GPIO/GPIO12`
|
||||
|
||||
* ### GPIO/GPIO{PinNumber}
|
||||
Depending on device configuration (`Settings` --> `Configuration` --> Chapter `GPIO`)
|
||||
|
||||
|
||||
# Subscribed topics
|
||||
`MainTopic`/{subscribed topic}, e.g. `watermeter/ctrl/flow_start`
|
||||
|
||||
## Control
|
||||
|
||||
* ### Ctrl/flow_start
|
||||
Trigger a flow start by publishing to this topic (any character, length > 0)
|
||||
|
||||
* ### GPIO/GPIO{PinNumber}
|
||||
Depending on device configuration (`Settings` --> `Configuration` --> Chapter `GPIO`)
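As a quick sketch of how to watch and trigger the device from the command line with the Mosquitto clients (the broker address and the main topic `watermeter` are only example values):

```shell
# Watch the published reading
mosquitto_sub -h 192.168.1.10 -t "watermeter/main/value" -v
# Trigger a new digitization round
mosquitto_pub -h 192.168.1.10 -t "watermeter/ctrl/flow_start" -m "1"
```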
|
||||
docs/Migrate-Old-Config-To-New-Config.md
|
||||
# Migration from water-meter „old“ to water-meter “AI-on-the-edge-device”
|
||||
|
||||
|
||||
|
||||
Only a few steps are necessary to migrate your old system to the new one.
|
||||
|
||||
Please follow the following steps:
|
||||
|
||||
#### 1. Follow the installation guide to flash the ESP32CAM and prepare a SD-Card with the content of the master
|
||||
|
||||
#### 2. Save the following files from the old Docker system on your PC:
|
||||
|
||||
* Reference Points 1-3 (only 2 needed)
|
||||
* `Config.ini`
|
||||
|
||||
#### 3. Copy Reference Points 1-3 onto the new water-meter system (Directory `/config`)
|
||||
|
||||
**Please note only two Reference Points are supported in the new system.**
|
||||
|
||||
#### 4. Open new `config.ini` File:
|
||||
|
||||
From the old `Config.ini` file, take the `[alignment]`, `[alignment.ref0]` and `[alignment.ref1]` sections - i.e. the two reference x/y positions and the `initial_rotation_angle=123` - and insert them into the new `config.ini` file, e.g.:
|
||||
|
||||
###### Old:
|
||||
```
|
||||
[alignment.ref0]
|
||||
image=./config/RB01_65x65.jpg
|
||||
pos_x=28
|
||||
pos_y=63
|
||||
|
||||
[alignment.ref1]
|
||||
image=./config/RB02_50x35.jpg
|
||||
pos_x=497
|
||||
pos_y=127
|
||||
|
||||
[alignment]
|
||||
initial_rotation_angle=180
|
||||
```
|
||||
|
||||
###### New:
|
||||
|
||||
```
|
||||
[Alignment]
|
||||
InitalRotate=180
|
||||
/config/RB01_65x65.jpg 28, 63
|
||||
/config/RB02_50x35.jpg 497, 127
|
||||
SearchFieldX = 20
|
||||
SearchFieldY = 20
|
||||
```
|
||||
|
||||
|
||||
#### 5. Insert the old Digit Values into the new `Config.ini` File, e.g.:
|
||||
|
||||
###### Old:
|
||||
```
|
||||
[Digital_Digit.ziffer1]
|
||||
pos_x=265
|
||||
pos_y=117
|
||||
dx=28
|
||||
dy=51
|
||||
|
||||
[Digital_Digit.ziffer2]
|
||||
pos_x=310
|
||||
pos_y=117
|
||||
dx=28
|
||||
dy=51
|
||||
|
||||
[Digital_Digit.ziffer3]
|
||||
pos_x=354
|
||||
pos_y=117
|
||||
dx=28
|
||||
dy=51
|
||||
|
||||
[Digital_Digit.ziffer4]
|
||||
pos_x=399
|
||||
pos_y=117
|
||||
dx=28
|
||||
dy=51
|
||||
|
||||
[Digital_Digit.ziffer5]
|
||||
pos_x=445
|
||||
pos_y=115
|
||||
dx=28
|
||||
dy=51
|
||||
```
|
||||
|
||||
###### New:
|
||||
```
|
||||
[Digits]
|
||||
Model=/config/dig0630s3.tflite
|
||||
;LogImageLocation = /log/digit
|
||||
ModelInputSize 20, 32
|
||||
digit1, 265, 117, 28, 51
|
||||
digit2, 310, 117, 28, 51
|
||||
digit3, 354, 117, 28, 51
|
||||
digit4, 399, 117, 28, 51
|
||||
digit5, 445, 115, 28, 51
|
||||
```
|
||||
|
||||
|
||||
#### 6. Make sure that you have the same quality and size settings as in your old `Config.ini`
|
||||
|
||||
In the old configuration this was encoded in the URL string for the image source:
|
||||
###### Old:
|
||||
```
|
||||
URLImageSource=http://IP-ADRESS/capture_with_flashlight?quality=5&size=VGA
|
||||
```
|
||||
|
||||
Default was Quality=5 and VGA.
|
||||
|
||||
###### New:
|
||||
|
||||
```
|
||||
ImageQuality = 5
|
||||
ImageSize = VGA
|
||||
```
|
||||
|
||||
|
||||
|
||||
#### 7. Repeat the same for the analog section
|
||||
|
||||
#### 8. Insert your SSID and Password into the new wlan.ini File
|
||||
|
||||
#### 9. Compare the old [ConsistencyCheck] section with the new [PostProcessing] section and edit accordingly
|
||||
|
||||
#### 10. Save new config.ini File in the new System.
|
||||
|
||||
#### 11. Restart the system.
|
||||
|
||||
#### 12. After the first start set manually the PreValue in the new system
|
||||
docs/Neural-Network-Types.md
|
||||
This section describes the different types of neural networks that are used with the AI-on-the-edge approach and gives an introduction on how and where to use them.
|
||||
|
||||
|
||||
|
||||
### Content
|
||||
|
||||
1) Overview neural network type
|
||||
2) Naming convention
|
||||
3) Overview of trained types and details
|
||||
|
||||
_______________________________
|
||||
|
||||
|
||||
### 1. Overview neural network type
|
||||
|
||||
There are two **types of input**:
|
||||
|
||||
* digits with rolling numbers (top down)
|
||||
|
||||
* analog pointers (clockwise rotating pointer)
|
||||
|
||||
There are two **types of neural networks**:
|
||||
|
||||
* *classification networks* with discrete output neurons for each result class:
|
||||
* 11 classes for digits (0, 1, ... 8, 9 + "Not-A-Number")
|
||||
* 100 classes for digits or analog pointers (0.1, 0.2, 0.3, ... , 9.7, 9.8, 9.9)
|
||||
* *continuous output networks* with a continuous output in the interval [0, 10[
|
||||
|
||||
No setting of the type in the firmware is necessary. The type is detected automatically from the output structure.
|
||||
|
||||
**Attention:**
|
||||
|
||||
* It is very important to choose the right network type (digits or analog pointers).
|
||||
Technically a wrong network will work and create output, but that would be totally arbitrary
|
||||
* Not all type of pointers are trained in all networks.
|
||||
* For the 11-class digits network many different types of digits have been trained. The reasons are that 1) only 20-30 training images are needed per type and 2) the data collection has been going on for much longer
* For the continuous and the 100-class networks, especially for the digits, only a few types of digits have been trained up to now
* Therefore, for the digits it is sometimes more effective to choose the simpler 11-class network type (= default)
|
||||
|
||||
_______________________________
|
||||
|
||||
|
||||
### 2. Naming convention
|
||||
|
||||
| | Classification<br />11 classes<br />0, 1, ... 9 + "N" | Classification<br />100 classes<br />0.0, 0.1, ... 9.9 | Continuous<br />Interval<br />[0, 10[ |
|
||||
| ---------------------------------------------------- | ----------------------------------------------------- | ------------------------------------------------------ | ------------------------------------- |
|
||||
| **Digits** <br /> | **dig-class11**_XXX.tflite | **dig-class100**_XXX.tflite | **dig-cont**_XXX.tflite |
|
||||
| **Analog Pointers** <br /> | | **ana-class100**_XXX.tflite | **ana-cont**_XXX.tflite |
|
||||
|
||||
XXX contains the versioning and a parameter for different sizes with the following naming:
|
||||
|
||||
XXX = versioning_sY
|
||||
|
||||
* versioning = version or in newer networks the training data
|
||||
|
||||
* Y = neural network size (typically s1, s2, ..., s4), where s1 is the largest neural network and s4 is the smallest
|
||||
|
||||
Optionally the name ends with "_q" to signal that the tflite file has been quantized (size reduction with minimal accuracy loss).
|
||||
|
||||
Example: `dig-class11_1410_s2_q.tflite`
|
||||
|
||||
* Classification network for digits with 11 classes (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, N)
|
||||
* Version 1410 = 14.1.0
|
||||
* s2 = Size 2 (Medium)
|
||||
* q = Quantized Version
|
||||
|
||||
|
||||
|
||||
|
||||
_______________________________________________________
|
||||
|
||||
### 3. Overview of trained types and details
|
||||
|
||||
#### 3a. Analog Pointer ("ana-cont_XXX.tflite" & "ana-class100_XXX.tflite")
|
||||
|
||||
This network transfers the direction of a pointer into a continuous number between 0 and 1, where 0 (= 1) is the upward position (12 o'clock), 0.25 corresponds to the 3 o'clock position and so on. This network is an envelope for all different types of pointers. Currently there are no dedicated network trainings for specific types of pointers.
|
||||
|
||||
There are two types of network structure, and currently both are supported. "class100" is a pure classification network that might need a bit more accuracy in the labeling. "cont" is a non-classical approach with a continuous output of only 2 neurons (details see below).
|
||||
|
||||
##### Types of counters trained:
|
||||
|
||||
| | | | |
|
||||
| ----------------------------------- | ----------------------------------- | ----------------------------------- | ----------------------------------- |
|
||||
|  |  |  |  |
|
||||
|  |  |  | |
|
||||
|
||||
##### Training data needs
|
||||
|
||||
* Quadratic images, minimum size: 32x32 pixel
|
||||
* Typically 100 - 200 images with a resolution of 1/100 of the full rotation (every 0.1 value or 3.6°)
|
||||
* Naming: x.y_ARBITRARY.jpg, where x.y = value 0.0 ... 9.9
|
||||
|
||||
##### CNN Technical details:
|
||||
|
||||
###### Input
|
||||
|
||||
* 32 x 32 RGB images
|
||||
|
||||
###### Output
|
||||
|
||||
* **ana-cont**_XXX.tflite:
|
||||
* 2 neurons with output in the range [-1, 1] - representing a sine / cosine encoding of the angle
* needs to be converted to an angle, e.g. with the two-argument arctangent (atan2)
|
||||
|
||||
* **ana-class100**_XXX.tflite
|
||||
* 100 neurons representing the classes from 0.0, 0.1, ... 9.8, 9.9
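As an illustration of the decoding step for the **ana-cont** output, here is a small sketch in Python (assuming the two neurons hold the sine and cosine of the pointer angle measured clockwise from the 12 o'clock position, and mapping the result to the [0, 10[ interval mentioned above; the function name is made up for this example):

```python
import math

def decode_ana_cont(sin_out, cos_out):
    """Convert the sine/cosine pair back to a pointer reading in [0, 10)."""
    angle = math.atan2(sin_out, cos_out)       # angle in radians
    fraction = (angle / (2 * math.pi)) % 1.0   # fraction of a full rotation
    return fraction * 10.0

# Pointer at the 3 o'clock position (a quarter rotation) -> reading ~2.5
print(decode_ana_cont(1.0, 0.0))
```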
|
||||
|
||||
|
||||
|
||||
|
||||
#### 3b. Digits with 11 classes ("dig-class11_XXX.tflite")
|
||||
|
||||
The digit network is a classical classification network with 11 classes representing the numbers 0, 1, ... 9 and the special class "N". It is trained on the rolling digits of gas and electricity meters. As a digit is sometimes caught between two positions, the special class "N" represents Not-A-Number for the case that the image cannot be uniquely classified to one number, e.g. because it is between two digits. This type needs the lowest amount of training data per digit type, resulting in a large variety of types already being part of the training set.
|
||||
|
||||
|
||||
##### Types of counters trained:
|
||||
|
||||
| | | | | | | |
|
||||
| -------------------------- | -------------------------- | -------------------------- | -------------------------- | -------------------------- | -------------------------- | -------------------------- |
|
||||
|  |  |  |  |  |  |  |
|
||||
|  |  |  |  |  |  | |
|
||||
| | | | | | | |
|
||||
|
||||
|
||||
##### Training data needs
|
||||
|
||||
* RGB images, with minimum size: 20x32 pixel
|
||||
* Typically 10 - 20 images (1-2 for each digit and an arbitrary number for the "N" class)
|
||||
|
||||
* Naming: x_ARBITRARY.jpg, where x = value 0 ... 9 + N
|
||||
|
||||
##### CNN Technical details:
|
||||
|
||||
###### Input
|
||||
|
||||
* 20 x 32 RGB images
|
||||
|
||||
###### Output
|
||||
|
||||
* 11 neurons for image classification (last layer normalized to 1)
|
||||
* Neurons 0 to 9 represent the corresponding numbers "0" to "9"
* Neuron 10 represents the "Not-A-Number" class, indicating that the image cannot be uniquely classified
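A minimal sketch of the corresponding decoding step (pick the most active of the 11 output neurons; the helper name is made up for this example):

```python
def decode_dig_class11(outputs):
    """outputs: list of 11 activations; classes 0-9 are the digits, class 10 is "N"."""
    best = max(range(len(outputs)), key=lambda i: outputs[i])
    return "N" if best == 10 else str(best)

print(decode_dig_class11([0.01] * 10 + [0.9]))  # -> "N"
```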
|
||||
|
||||
|
||||
|
||||
#### 3c. Digits with rolling results ("dig-class100_XXX.tflite" & "dig-cont_XXX.tflite")
|
||||
|
||||
This type of network tries to overcome the problem that there are intermediate values when a rolling digit is between two numbers. Previously this was handled by the "N" class. In this network type, sub-digit values are also trained, so that the intermediate state can be used as additional information by the algorithms.
|
||||
|
||||
|
||||
##### Types of counters trained:
|
||||
|
||||
| | | | |
|
||||
| ---------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ---- |
|
||||
|  |   |    | |
|
||||
| | | | |
|
||||
|
||||
|
||||
|
||||
##### Training data needs
|
||||
|
||||
* RGB images, with minimum size: 20x32 pixel
|
||||
* Typically 100 - 200 images (1-2 for each possible position)
|
||||
|
||||
* Naming: x.y_ARBITRARY.jpg, where x.y = 0.0, 0.1, ... 9.9 representing the intermediate state
|
||||
|
||||
##### CNN Technical details:
|
||||
|
||||
###### Input
|
||||
|
||||
* 20 x 32 RGB images
|
||||
|
||||
###### Output
|
||||
|
||||
* **dig-cont**_XXX.tflite:
|
||||
* 10 neurons representing the digits 0, 1, ... 9. The intermediate values are represented by weighted normalized values of two neighboring output neurons
|
||||
* needs to be converted to angle with arctan-hyperbolicus function
|
||||
|
||||
* **dig-class100**_XXX.tflite
|
||||
* 100 neurons representing the classes from 0.0, 0.1, ... 9.8, 9.9
|
||||
|
||||
docs/OTA---Update-Firmware-and-Web-Interface.md
|
||||
# Over-The-Air (OTA) Update
|
||||
|
||||
You can do an OTA (over-the-air) update via the graphical user interface.
|
||||
Grab the firmware from the
|
||||
|
||||
* [Releases page](https://github.com/jomjol/AI-on-the-edge-device/releases) (Stable, tested versions), or the
|
||||
* [Automatically build development branch](https://github.com/jomjol/AI-on-the-edge-device/actions?query=branch%3Arolling) (experimental, untested versions). Please have a look on https://github.com/jomjol/AI-on-the-edge-device/wiki/Install-a-rolling-%28unstable%29-release first!
|
||||
|
||||
You need:
|
||||
* firmware.bin
|
||||
* html.zip
|
||||
|
||||
### **General remark:**
|
||||
|
||||
- It is always recommended to upload both files, as they are coupled to each other
|
||||
- If you make a major update, it might be necessary to modify the `config.ini`, as its syntax or content may have changed
|
||||
- It is recommended to make a **backup** of the `/config` directory, minimum of the `config.ini`.
|
||||
|
||||
|
||||
|
||||
### Access to the update page:
|
||||
|
||||
The graphical OTA update can be accessed via the menu "System":
|
||||
|
||||
* <img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/ota-update-menue.jpg" width="600" align="middle">
|
||||
|
||||
|
||||
### Update
|
||||
|
||||
* <img src="https://raw.githubusercontent.com/jomjol/ai-on-the-edge-device/master/images/ota-update-details.jpg" width="600" align="middle">
|
||||
|
||||
Just follow the steps 1 to 5 to perform the update:
|
||||
|
||||
1. Select (a) and upload (b) the file `firmware.bin`
|
||||
2. Flash the firmware
|
||||
3. Select (a) and upload (b) the file `html.zip`
|
||||
4. Update the html-files
|
||||
5. Reboot
|
||||
|
||||
|
||||
|
||||
**After a reboot following a major change, it is recommended to check the configuration settings and save them again.**
|
||||
docs/REST-API.md
|
||||
Various information is directly accessible over specific REST calls.
|
||||
|
||||
For an up-to-date list search the Github repository for [registered handlers](https://github.com/jomjol/AI-on-the-edge-device/search?q=camuri.uri)
|
||||
|
||||
# Often used APIs
|
||||
Just append them to the IP, separated by a `/`, e.g. `http://192.168.1.1/json`.
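For example, from the command line (replace the IP with your device's address):

```shell
curl http://192.168.1.1/json
curl "http://192.168.1.1/value?all=true&type=raw"
```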
|
||||
|
||||
## Control
|
||||
* ### flow_start
|
||||
|
||||
* ### gpio
|
||||
The `gpio` entrypoint also supports parameters:
|
||||
- `/GPIO?GPIO=12&Status=high`
|
||||
|
||||
* ### ota
|
||||
|
||||
* ### ota_page.html
|
||||
|
||||
* ### reboot
|
||||
|
||||
## Results
|
||||
* ### json
|
||||
|
||||
* ### value
|
||||
The `value` entrypoint also supports parameters:
|
||||
- `http://<IP>/value?all=true&type=value`
|
||||
- `http://<IP>/value?all=true&type=raw`
|
||||
- `http://<IP>/value?all=true&type=error`
|
||||
- `http://<IP>/value?all=true&type=prevalue`
|
||||
|
||||
* ### img_tmp/alg_roi.jpg
|
||||
Last captured picture
|
||||
|
||||
## Status
|
||||
* ### statusflow
|
||||
|
||||
* ### rssi
|
||||
|
||||
* ### cpu_temperature
|
||||
|
||||
* ### sysinfo
|
||||
|
||||
* ### starttime
|
||||
|
||||
* ### uptime
|
||||
|
||||
## Camera
|
||||
* ### lighton
|
||||
|
||||
* ### lightoff
|
||||
|
||||
* ### capture
|
||||
|
||||
* ### capture_with_flashlight
|
||||
|
||||
* ### save
|
||||
The `save` entrypoint also supports parameters:
|
||||
- `http://<IP>/save?filename=test.jpg&delay=3`
|
||||
|
||||
## Logs
|
||||
* ### log
|
||||
Last part of today's log
|
||||
|
||||
* ### logfileact
|
||||
Full log of today
|
||||
|
||||
* ### log.html
|
||||
docs/ROI-Configuration.md
|
||||
# ROI (Region of Interest) Configuration
|
||||
|
||||
General remark:
|
||||
> You are using a neural network approach which is trained to fit as many different types of meters as possible. The accuracy will never be 100%, and it is normal to see a missed reading once in a while. There are several precautions to detect this; for details see the section `PostProcessing` on the configuration page.
|
||||
|
||||
The most critical settings for accurate detection are:
|
||||
|
||||
1. Correct setting of the **R**egions **O**f **I**nterest (ROIs) for detection of the image.
|
||||
> This must be done manually for each meter!
|
||||
2. Number type is part of the training set.
|
||||
> Have a look on the [Digital Counters](https://jomjol.github.io/neural-network-digital-counter-readout/) resp. [Analog Needles](https://jomjol.github.io/neural-network-analog-needle-readout) to check if your types are contained. If your number types are **not** contained, you should take the effort to record them so we can add them to the training data. See: [Learn models with your own images](https://github.com/jomjol/AI-on-the-edge-device/wiki/Learn-models-with-your-own-images) on how to create new input.
|
||||
|
||||
_____
|
||||
|
||||
## 1. Correct Setup of ROI
|
||||
Please proceed in the following order!
|
||||
|
||||
Don't forget to save after each step!
|
||||
|
||||
### 1. Image Sharpness
|
||||
Ensure a sharp image of the camera by adjusting the focal length of the ESP OV2640 camera.
|
||||
**Adjust the focus for the clearest possible image** See [these instructions](https://github.com/jomjol/water-meter-picture-provider/blob/master/ESP32-CAM_Lens_Modification.md) for help.
|
||||
|
||||
### 2. Horizontal Alignment
|
||||
Ensure an **exact horizontal alignment** of the number via the alignment / reference setup:
|
||||
|
||||
| :heavy_check_mark: Okay | :x: Not Okay |
| ------------------------------ | ---------------------------------- |
|  |  |

### 3. Correct Size for ROI
Choose the right size of the ROI:
> The configuration of the ROIs differs a bit depending on the model you choose. Below you find the differences between the AI models. Pick the one you think fits your purpose best. If you don't get good results, try another model.

### 4. Model Selection
#### dig-class11 Configuration
dig-class11 models recognize the **complete digit only**. Here it is not relevant whether the ROI fits the border of the digit window.

For this model, there should be a border of 20% of the image size around the number itself. This border is shown in the ROI setup image by the inner, thinner rectangle. This rectangle should fit perfectly around the number when the number has not yet started to rotate to the next position:

<img width="300px" src="https://github.com/jomjol/AI-on-the-edge-device/wiki/images/ROI_drawing.jpg">

| | Example 1 | Example 2 |
| ------------ | --------------------------------- | --------------------------------- |
| :heavy_check_mark: **Okay** |  |  |
| :x: **Not** Okay |  |  |
| :x: **Not** Okay |  |  |

If you have perfect alignment and are still not getting satisfying results, most probably your numbers are not part of the training data yet. Read [Learn models with your own images](https://github.com/jomjol/AI-on-the-edge-device/wiki/Learn-models-with-your-own-images) on how to add your meter's type of numbers to the training set.

#### dig-class100 / dig-cont Configuration

These models recognize the tenths (fractions) between the numbers. They require a different ROI setup; the height must be set differently and more accurately.

First, the width can be set as for dig-class11, i.e. a 20% margin left and right.

<img width="455" alt="ROI-setup" src="https://user-images.githubusercontent.com/412645/199028748-c48ef5bb-a8d4-4c77-9faf-763e6cf77351.png">

The height of the outer rectangle should be set to the upper and lower edge of the number window. To achieve this setting, you need to unlock the aspect ratio:

<img width="168" alt="unlockAspectRatio" src="https://user-images.githubusercontent.com/412645/199028590-21708ff3-15a3-4415-89b1-c2affcfce003.png">

Here is an example:

| | Example 1 |
| ------------ | --------------------------------- |
| :heavy_check_mark: **Okay** | <img width="125" alt="dig-class100_OK" src="https://user-images.githubusercontent.com/412645/199028380-7623776e-59b9-4356-ab55-3852253609df.png"> |
| :x: **Not** Okay | <img width="125" alt="dig-class100_NOK" src="https://user-images.githubusercontent.com/412645/199028469-3a69ed31-e5c9-4038-a8dc-6d44a42437ed.png"> |
17
docs/Release-creation.md
Normal file
@@ -0,0 +1,17 @@
## Preparing for release

1. The [Changelog](https://github.com/jomjol/AI-on-the-edge-device/blob/rolling/Changelog.md) is merged back from the `master` branch to the `rolling` branch (this should be the last step of the previous release creation).
1. All changes are documented in the [Changelog](https://github.com/jomjol/AI-on-the-edge-device/blob/rolling/Changelog.md) in the `rolling` branch.

## Release creation steps
1. Merge `rolling` into the `master` branch.
2. Best to wait for the GitHub Action to run successfully.
3. On the `master` branch, tag the version like `v11.3.1` and don't forget to push the tag.
4. Wait for the release-creation GitHub Action. After it is done:
   * the release should be created
   * the artifacts are downloadable from the release
   * the documented changes are applied to the release
5. Merge `master` back into `rolling`.
   1. In `rolling`, create a folder `rolling/docs/releases/download/<VERSION>` and add the `firmware.bin` from one of the release artifacts.
   1. Update `rolling/docs/manifest.json` with the new version (update the `version` and the last `path` fields, see the sketch below).
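For orientation, a trimmed and purely illustrative fragment of `manifest.json` after such an update is sketched below. Only the two fields mentioned above are shown; the real file contains additional fields and parts, so always check the actual file in the repository.

```json
{
  "version": "v11.3.1",
  "builds": [
    {
      "parts": [
        { "path": "releases/download/v11.3.1/firmware.bin" }
      ]
    }
  ]
}
```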
44
docs/Testing.md
Normal file
@@ -0,0 +1,44 @@
## Testing Option for VSCode

You can test your functions directly on the device.

## Structure

All tests live in the `test` directory of the project and are not compiled with the default PlatformIO build option. The main function is in the file `test_suite_controlflow.cpp`. In the method `app_main()` you can add your own tests.

<img width="400" alt="image" src="https://user-images.githubusercontent.com/412645/209811778-7efe3b83-8954-4d3b-afa3-d3718fcd9058.png">

## Include my own test

In the method `app_main()` of `test_suite_controlflow.cpp` you can add your own tests. Include your test file at the top like

```#include "components/jomjol-flowcontroll/test_flow_postrocess_helper.cpp"```

`components` here is a subfolder of the test directory, not the `components` directory of the source root.

At the bottom, register your test function:

```RUN_TEST(testNegative);```

Your test function should contain a `TEST_ASSERT_EQUAL_*` assertion; a minimal sketch is shown below. For more information look at [unity-testing](https://docs.platformio.org/en/latest/advanced/unit-testing/frameworks/unity.html).
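As an illustration, a minimal test function could look like the following sketch. The values are placeholders (replace them with a call into the code you actually want to test); `testNegative` matches the registration example above.

```cpp
// Hypothetical example test: add it at the bottom of test_suite_controlflow.cpp
// and register it in app_main() with RUN_TEST(testNegative);
#include <unity.h>

void testNegative(void) {
    int expected = -1;   // value you expect from the code under test
    int actual   = -1;   // placeholder: replace with a call into your own code
    TEST_ASSERT_EQUAL_INT(expected, actual);
}
```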
## Run tests

You will need a test device, ideally with a USB adapter. Before you upload your tests, you need to set up the device with the initial setup procedure described in [[Installation]].

<img width="300" alt="image" src="https://user-images.githubusercontent.com/412645/209813215-e0ea7405-6ff4-48d0-8dab-97bfab6962af.png">

Now you can use Visual Studio Code or a standard console to upload the test code. In VS Code (PlatformIO tab) open _Advanced_ and select _Test_.

<img width="467" alt="image" src="https://user-images.githubusercontent.com/412645/209813917-ea7fca50-2553-4acf-a8af-ecdac84a01ea.png">

Alternatively, you can run it in a console/terminal with `platformio test --environment esp32cam`.

In my environment the serial terminal does not open automatically, so I have to open it myself. You will see a lot of logging. If any test fails, it is logged; otherwise it logs at the end that all tests passed.

## Troubleshooting

If you test too many cases in one function, the device runs into a stack overflow and an endless boot loop. Reduce the number of test cases or split the test function into multiple functions.
32
docs/Watermeter-specific-analog---digital-transition.md
Normal file
@@ -0,0 +1,32 @@
# Understanding the problem

For most water meters the default configuration should work. But the digits, especially the last digit, behave differently on some devices.

## "Normal" transition

In most cases, the transition of the last digit starts when the analogue pointer is > 9.

Often the last digit "hangs" a bit on these devices and does not make it past zero, so it is not easy to see which digit is correct. In the first example: 4, or still 3? (3 is correct.)

<img width="122" alt="image" src="https://user-images.githubusercontent.com/412645/209808192-5ff67e9f-ea7c-4d82-a8e4-54b3643c7e24.png">
<img width="122" alt="image" src="https://user-images.githubusercontent.com/412645/209808306-359cce2e-ec84-4390-82d1-6747e1ec056c.png">

## Early transition

Some units start the transition very early or run in sync with the analogue pointer. In the third example, is it a 3 or a 2?

<img width="122" alt="image" src="https://user-images.githubusercontent.com/412645/209807685-658fb9bb-648a-4779-bc30-805eadc12083.png">
<img width="122" alt="image" src="https://user-images.githubusercontent.com/412645/209808972-448bb6d0-7b7e-4030-abb2-9c966ceffc4a.png">
<img width="122" alt="image" src="https://user-images.githubusercontent.com/412645/209809116-d4acc5f2-ab5c-4304-9559-598b1dfc59c2.png">

## Inaccuracies in image recognition

The image recognition models are good, but have inaccuracies in the range of +/- 0.2. In order to obtain as many correct results as possible, a special treatment is carried out in post-processing when the analogue pointer is in the range 9.8–0.2; this treatment must start at a different point depending on the type of meter.

## How to configure for my meter type

If you have a device with "normal" transition, you should not have any issues. On devices with "early" transition, you can set the option `AnalogDigitalTransitionStart` to a value between 6 and 8 (see the sketch below).
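A minimal sketch of how this could look in the `config.ini`, assuming the option sits in the `PostProcessing` section; the exact section name and whether the option is prefixed with the name of your number sequence may differ on your installation:

```ini
[PostProcessing]
; other post-processing options stay unchanged
AnalogDigitalTransitionStart = 7.0
```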
BIN
docs/img/0_arbitrary.jpg
Normal file
|
After Width: | Height: | Size: 1.5 KiB |
BIN
docs/img/3_arbitrary.jpg
Normal file
|
After Width: | Height: | Size: 1.7 KiB |
BIN
docs/img/ROI_drawing.jpg
Normal file
|
After Width: | Height: | Size: 42 KiB |
BIN
docs/img/ROI_example_settings.jpg
Normal file
|
After Width: | Height: | Size: 11 KiB |
BIN
docs/img/alignment_not_okay.jpg
Normal file
|
After Width: | Height: | Size: 14 KiB |
BIN
docs/img/alignment_okay.jpg
Normal file
|
After Width: | Height: | Size: 11 KiB |
BIN
docs/img/ana-cont/examp-ana1.jpg
Normal file
|
After Width: | Height: | Size: 1.2 KiB |
BIN
docs/img/ana-cont/examp-ana2.jpg
Normal file
|
After Width: | Height: | Size: 1.2 KiB |
BIN
docs/img/ana-cont/examp-ana3.jpg
Normal file
|
After Width: | Height: | Size: 1.3 KiB |
BIN
docs/img/ana-cont/examp-ana4.jpg
Normal file
|
After Width: | Height: | Size: 1.2 KiB |
BIN
docs/img/ana-cont/examp-ana5.jpg
Normal file
|
After Width: | Height: | Size: 1.2 KiB |
BIN
docs/img/ana-cont/examp-ana6.jpg
Normal file
|
After Width: | Height: | Size: 1.3 KiB |
BIN
docs/img/ana-cont/examp-ana7.jpg
Normal file
|
After Width: | Height: | Size: 1.4 KiB |
BIN
docs/img/ana-examp.jpg
Normal file
|
After Width: | Height: | Size: 1.6 KiB |
BIN
docs/img/bw_not_okay_big.jpg
Normal file
|
After Width: | Height: | Size: 12 KiB |
BIN
docs/img/bw_not_okay_small.jpg
Normal file
|
After Width: | Height: | Size: 8.2 KiB |
BIN
docs/img/bw_okay.jpg
Normal file
|
After Width: | Height: | Size: 7.5 KiB |
BIN
docs/img/correct_algo_1.jpg
Normal file
|
After Width: | Height: | Size: 69 KiB |
BIN
docs/img/correct_algo_2.jpg
Normal file
|
After Width: | Height: | Size: 49 KiB |
BIN
docs/img/correct_algo_3.jpg
Normal file
|
After Width: | Height: | Size: 40 KiB |
BIN
docs/img/correct_algo_zero_crossing.jpg
Normal file
|
After Width: | Height: | Size: 70 KiB |
BIN
docs/img/dig-class11/examp-dig1.jpg
Normal file
|
After Width: | Height: | Size: 687 B |
BIN
docs/img/dig-class11/examp-dig10.jpg
Normal file
|
After Width: | Height: | Size: 803 B |
BIN
docs/img/dig-class11/examp-dig11.jpg
Normal file
|
After Width: | Height: | Size: 1003 B |
BIN
docs/img/dig-class11/examp-dig12.jpg
Normal file
|
After Width: | Height: | Size: 794 B |
BIN
docs/img/dig-class11/examp-dig13.jpg
Normal file
|
After Width: | Height: | Size: 727 B |
BIN
docs/img/dig-class11/examp-dig2.jpg
Normal file
|
After Width: | Height: | Size: 871 B |
BIN
docs/img/dig-class11/examp-dig3.jpg
Normal file
|
After Width: | Height: | Size: 656 B |
BIN
docs/img/dig-class11/examp-dig4.jpg
Normal file
|
After Width: | Height: | Size: 832 B |
BIN
docs/img/dig-class11/examp-dig5.jpg
Normal file
|
After Width: | Height: | Size: 718 B |
BIN
docs/img/dig-class11/examp-dig6.jpg
Normal file
|
After Width: | Height: | Size: 643 B |
BIN
docs/img/dig-class11/examp-dig7.jpg
Normal file
|
After Width: | Height: | Size: 20 KiB |
BIN
docs/img/dig-class11/examp-dig8.jpg
Normal file
|
After Width: | Height: | Size: 820 B |
BIN
docs/img/dig-class11/examp-dig9.jpg
Normal file
|
After Width: | Height: | Size: 649 B |
BIN
docs/img/dig-cont/dig-cont_1.jpg
Normal file
|
After Width: | Height: | Size: 668 B |
BIN
docs/img/dig-cont/dig-cont_2a.jpg
Normal file
|
After Width: | Height: | Size: 660 B |
BIN
docs/img/dig-cont/dig-cont_2b.jpg
Normal file
|
After Width: | Height: | Size: 649 B |
BIN
docs/img/dig-cont/dig-cont_3a.jpg
Normal file
|
After Width: | Height: | Size: 894 B |
BIN
docs/img/dig-cont/dig-cont_3b.jpg
Normal file
|
After Width: | Height: | Size: 883 B |
BIN
docs/img/dig-cont/dig-cont_3c.jpg
Normal file
|
After Width: | Height: | Size: 819 B |
BIN
docs/img/enable_log_image.jpg
Normal file
|
After Width: | Height: | Size: 46 KiB |
BIN
docs/img/progammer_manual.jpg
Normal file
|
After Width: | Height: | Size: 84 KiB |
BIN
docs/img/wb_not_okay_big.jpg
Normal file
|
After Width: | Height: | Size: 6.2 KiB |
BIN
docs/img/wb_not_okay_small.jpg
Normal file
|
After Width: | Height: | Size: 4.0 KiB |
BIN
docs/img/wb_okay.jpg
Normal file
|
After Width: | Height: | Size: 4.1 KiB |
@@ -1,4 +0,0 @@
# Welcome
Welcome to the **AI on the Edge Device** Project Documentation!

...
@@ -3,7 +3,7 @@
nav:
  # List all files in the expected order
  - index.md
  - Home.md

  - Links:
    - Web Installer/Console: https://jomjol.github.io/AI-on-the-edge-device/index.html
@@ -14,3 +14,11 @@ plugins:
      filename: nav.yml

# The navigation is configured in the nav.yml file!

# Emoji support
# See https://squidfunk.github.io/mkdocs-material/reference/icons-emojis/
markdown_extensions:
  - attr_list
  - pymdownx.emoji:
      emoji_index: !!python/name:materialx.emoji.twemoji
      emoji_generator: !!python/name:materialx.emoji.to_svg