Commit b0625933 authored by Zhiying Li
# Review of Key Aspects of the System (10/01/2021)
Meeting with Zhaoliang (10/01/2021), Recorded by Zhiying (10/04/2021)
<img src="README_Graphs/0_cover.png" alt="0_cover" style="width: 75%;" />
## Current Hardware Setups
<img src="README_Graphs/1_current_setup.png" alt="1_current_setup" style="width: 75%;" />
* There are two main modules in the system:
  * **Camera**: detects the target (a green blob).
    * **Hardware**: OpenMV Cam H7
    * **Code**: [`openMV_main.py`](https://git.uclalemur.com/shahrulkamil98/blimp-autonomy/-/blob/main/Codes/OpenMV H7 Plus Camera/openMV_main.py)
  * **Control Board**: based on the camera's input (detect / not detect), controls the motors (catch / seeking).
    * **Hardware**: Feather ESP32
    * **Code**: [`feather_main_personaldemo_PID.ino`](https://git.uclalemur.com/shahrulkamil98/november-2021-blimp-competition/-/blob/main/Code/Main Feather Code/feather_main_personaldemo_PID.ino)
* The OpenMV camera (P4, P5) and the control board (SCL, SDA) are connected through the I2C protocol.
* A laptop connects to the OpenMV camera and the control board to update the code they execute.
* Three 1500 mAh batteries power the control board, which also mounts onto the motor shield.
* **High-Level Logic** (current):
```pseudocode
If [Camera] detects target --> (x, y):
    [Control_Board] runs PID control
    to move towards (x, y)
Else ([Camera] detects nothing):
    [Control_Board] seeks:
        Horizontally: rotate
        Vertically: no action
```
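The detect-or-seek branch above can be sketched as a tiny Python function. All three callables are hypothetical stand-ins, not names from the repository: `detect_target` plays the camera RPC call (returning `(x, y)` or `None`), `pid_step` the PID update, and `seek` the seeking routine.

```python
# Sketch of the detect-or-seek branch. The three callables are
# hypothetical stand-ins: detect_target() returns (x, y) or None,
# pid_step(x, y) moves towards the target, seek() rotates horizontally.
def control_step(detect_target, pid_step, seek):
    result = detect_target()
    if result is not None:
        x, y = result
        return pid_step(x, y)  # [Control_Board] PID control towards (x, y)
    return seek()              # [Control_Board] seeking: rotate horizontally
```

On the real system this runs once per pass through the Feather's `loop()`.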
## (Color) Detection Module
In `feather_main_personaldemo_PID.ino`, inside `loop(){...}`, the "detect / not detect" branch is guarded by a condition called **`cam.exe_color_detection_biggestblob()`**:
```c
...
if(cam.exe_color_detection_biggestblob(threshold[0], threshold[1], threshold[2], threshold[3], threshold[4], threshold[5], x, y)){...}
else {...}//seeking algorithm
...
```
This function **`cam.exe_color_detection_biggestblob()`** is defined in `Camera.cpp`:
```c
bool Camera::exe_color_detection_biggestblob(int8_t l_min, int8_t l_max, int8_t a_min, int8_t a_max, int8_t b_min, int8_t b_max, int& x, int& y){
  int8_t color_thresholds[6] = {l_min, l_max, a_min, a_max, b_min, b_max};
  struct { uint16_t cx, cy; } color_detection_result;
  // If the RPC call fails, the result struct is never filled in,
  // so report "not detected" instead of reading uninitialized memory.
  if (!interface->call(F("color_detection_single"), color_thresholds, sizeof(color_thresholds), &color_detection_result, sizeof(color_detection_result))) {
    return false;
  }
  x = color_detection_result.cx;
  y = color_detection_result.cy;
  // The camera returns (0, 0) when no blob was found.
  return !(x == 0 && y == 0);
}
```
It uses `interface->call()` from the OpenMV MicroPython [`rpc` library](https://docs.openmv.io/library/omv.rpc.html) to invoke the function `color_detection_single` in **Shahrul Kamil bin Hassan / blimp-autonomy / Codes / OpenMV H7 Plus Camera / [openMV_main.py](https://git.uclalemur.com/shahrulkamil98/blimp-autonomy/-/blob/main/Codes/OpenMV H7 Plus Camera/openMV_main.py)**:
```python
# When called returns the x/y centroid of the largest blob
# within the OpenMV Cam's field-of-view.
#
# data is the 6-byte color tracking threshold tuple of
# L_MIN, L_MAX, A_MIN, A_MAX, B_MIN, B_MAX.
def color_detection_single(data):
    red_led.off()
    green_led.on()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)
    thresholds = struct.unpack("<bbbbbb", data)
    print(thresholds)
    blobs = sensor.snapshot().find_blobs([thresholds],
                                         pixels_threshold=500,
                                         area_threshold=500,
                                         merge=True,
                                         margin=20)
    if not blobs:
        return struct.pack("<HH", 0, 0)  # No detections.
    for b in blobs:
        sensor.get_fb().draw_rectangle(b.rect(), color=(255, 0, 0))
        sensor.get_fb().draw_cross(b.cx(), b.cy(), color=(0, 255, 0))
    out_blob = max(blobs, key=lambda b: b.density())
    red_led.on()
    green_led.off()
    return struct.pack("<HH", out_blob.cx(), out_blob.cy())
```
There is also another function called `color_detection(thresholds)`, a more complex version of `color_detection_single()` that can detect multiple balls.
Now, taking a deeper look into the **`color_detection_single()`** function:
* **Input**: six `int8_t` numbers for the **threshold**
  * The threshold is in the [LAB color format](http://shutha.org/node/851)
  * It is passed all the way from `feather_main_personaldemo.ino` as the 6-byte color tracking threshold tuple `L_MIN, L_MAX, A_MIN, A_MAX, B_MIN, B_MAX`
  * It is used as the criterion for [`img.find_blobs()`](https://docs.openmv.io/library/omv.image.html#image.image.Image.image.find_blobs) when finding a blob
* `get_fb()`: (get frame buffer) returns the image object produced by the previous call to `sensor.snapshot()`
* `out_blob`: the largest blob by density
* **Output**: the center of the blob, `(out_blob.cx, out_blob.cy)`

**Note**:
* Currently, the threshold is manually tuned to detect green inside the experiment space.
* We plan to upgrade this to an ML-based process for more accurate detection.
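Since the RPC payloads are plain packed bytes, the threshold/centroid exchange can be mimicked off-device with Python's `struct` module. The threshold numbers below are placeholders, not the tuned values from the experiment space.

```python
import struct

# 6-byte LAB threshold tuple (L_MIN, L_MAX, A_MIN, A_MAX, B_MIN, B_MAX),
# packed as six little-endian signed bytes, as the Feather sends it.
# These green-ish values are placeholders, not the tuned thresholds.
payload = struct.pack("<bbbbbb", 30, 100, -64, -8, -32, 32)
thresholds = struct.unpack("<bbbbbb", payload)

# The camera replies with two unsigned 16-bit ints: the blob centroid,
# where (0, 0) means "no detection".
reply = struct.pack("<HH", 160, 120)
cx, cy = struct.unpack("<HH", reply)
detected = not (cx == 0 and cy == 0)
```

The `<bbbbbb` and `<HH` format strings match the `struct.unpack`/`struct.pack` calls in `color_detection_single()` above.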
## PID Control Module
### PID Prelim
$$
\begin{aligned}
&u(t)=K_{p} e(t)+K_{i} \int e(t)\, dt+K_{d} \frac{de}{dt} \\
&u(t)=\text{PID control variable} \\
&K_{p}=\text{proportional gain} \\
&K_{i}=\text{integral gain} \\
&K_{d}=\text{derivative gain} \\
&e(t)=\text{error value} \\
&de=\text{change in error value} \\
&dt=\text{change in time}
\end{aligned}
$$
In the discrete domain:
* Integration becomes summation
* Derivative becomes a difference
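A minimal discrete-time sketch of that update (this is not the PID_v1 library's exact code; the class name and the fixed `dt` are assumptions for illustration):

```python
# Minimal discrete PID: the integral is a running sum, the derivative a
# first difference. kp, ki, kd mirror the gains above; dt is assumed fixed.
class DiscretePID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def compute(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                   # integration as summation
        derivative = (error - self.prev_error) / self.dt   # derivative as difference
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

A production version (as in PID_v1) would also clamp the output and the integral term to the configured output limits.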
### Feather PID
**Shahrul Kamil bin Hassan / november-2021-blimp-competition / Code / Main Feather Code / [feather_main_personaldemo_PID.ino](https://git.uclalemur.com/shahrulkamil98/november-2021-blimp-competition/-/blob/main/Code/Main Feather Code/feather_main_personaldemo_PID.ino)**
The setup for PID is
```C
//Specify the links and initial tuning parameters
double Kpx=2, Kix=0.05, Kdx=0.25;
double Kpy=2, Kiy=0.1, Kdy=0.25;
PID PID_x(&Inputx, &Outputx, &Setpointx, Kpx, Kix, Kdx, DIRECT);
PID PID_y(&Inputy, &Outputy, &Setpointy, Kpy, Kiy, Kdy, DIRECT);
...
PID_y.SetOutputLimits(-255, 255); //up positive
PID_x.SetOutputLimits(-255, 255); //left positive
```
PID is an object class in the library: [PID_v1](https://github.com/br3ttb/Arduino-PID-Library/blob/master/PID_v1.cpp)
In `loop()`, each detection returns the blob center (`out_blob.cx`, `out_blob.cy`) through `x` and `y`:
```c
  int x = 0;
  int y = 0;
  panel.singleLED(DEBUG_STATE, 10, 0, 0); // standby
  if (cam.exe_color_detection_biggestblob(threshold[0], threshold[1], threshold[2], threshold[3], threshold[4], threshold[5], x, y)) {
    if (displayTracking > 0) {
      displayTrackedObject(x, y, RESOLUTION_W, RESOLUTION_H); // THIS NEEDS WORK
    }
    panel.singleLED(DEBUG_STATE, 0, 10, 0);
    Serial.println("blob detected");
    Serial.print("x value: ");
    Serial.println(x);
    Serial.print("y value: ");
    Serial.println(y);
    Inputx = x / 1.00;
    Inputy = y / 1.00;
    PID_x.Compute(); // <==== LOOK HERE
    PID_y.Compute(); // <==== LOOK HERE
    Serial.println(Outputy);
    Serial.println(Outputx);
    // actuate the motors
    moveVertical(Outputy);
    moveHorizontal(Outputx, BASE_SPEED);
  } else { // seeking algorithm
    //panel.resetPanel();
    panel.singleLED(DEBUG_STATE, 10, 10, 10);
    Serial.println("seeking...");
    int zero = 0;
    moveVertical(zero);
    moveHorizontal(SEEKING_SPEED, zero);
  }
}
```
`PID_x.Compute()` and `PID_y.Compute()` read `Inputx` / `Inputy` and write `Outputx` / `Outputy`.
`Outputx` / `Outputy` are bounded between -255 and 255.
They are then passed into `moveVertical(Outputy)` and `moveHorizontal(Outputx, BASE_SPEED)` to set the motor speeds:
```c
void moveVertical(int vel){
  if (vel > 0) { // up
    panel.singleLED(DEBUG_VERTICALSPEED, abs(vel), 0, 0);
    motorVertical->setSpeed(abs((int) vel));
    motorVertical->run(BACKWARD);
  } else if (vel < 0) { // down
    panel.singleLED(DEBUG_VERTICALSPEED, 0, 0, abs(vel));
    motorVertical->setSpeed(abs((int) vel)); // use vel, not the global Outputy
    motorVertical->run(FORWARD);
  } else {
    panel.singleLED(DEBUG_VERTICALSPEED, 0, 0, 0);
    motorVertical->setSpeed(0);
  }
}

void moveHorizontal(int vel_hori, int base_speed){
  int lspeed = -1 * vel_hori + base_speed;
  int rspeed = vel_hori + base_speed;
  if (rspeed > 0) {
    motorLeft->run(BACKWARD);
  } else {
    motorLeft->run(FORWARD);
  }
  if (lspeed > 0) {
    motorRight->run(BACKWARD);
  } else {
    motorRight->run(FORWARD);
  }
  displaySpeed(lspeed, rspeed);
  motorLeft->setSpeed(min(MAX_SPEED, abs(rspeed)));
  motorRight->setSpeed(min(MAX_SPEED, abs(lspeed)));
}
```
* For vertical movement:
  * -255: down
  * 0: stand still
  * 255: up
* For horizontal movement, there is a base speed.
  * (Assuming `BACKWARD` really means backward in this function:)
  * -255: lspeed = 255 + base_speed > 0; rspeed = -255 + base_speed < 0 (lspeed > 0, rotation to the right)
  * 0: lspeed = base_speed; rspeed = base_speed (moving forward without rotation)
  * 255: lspeed = -255 + base_speed < 0; rspeed = 255 + base_speed > 0 (rspeed > 0, rotation to the left)
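The differential mixing in `moveHorizontal` can be sketched in Python. `mix_horizontal` is a hypothetical helper, not a function from the repository; it returns the signed left/right speeds after the same `MAX_SPEED` clamp (the sign carries the direction that the Feather code expresses via `FORWARD`/`BACKWARD`).

```python
MAX_SPEED = 255  # same cap as in the Feather code

# Hypothetical helper mirroring moveHorizontal's mixing: vel_hori turns,
# base_speed drives forward; the sign encodes the motor direction.
def mix_horizontal(vel_hori, base_speed):
    lspeed = -vel_hori + base_speed
    rspeed = vel_hori + base_speed
    clamp = lambda s: max(-MAX_SPEED, min(MAX_SPEED, s))
    return clamp(lspeed), clamp(rspeed)
```

With `vel_hori = 0` both sides get `base_speed` (straight ahead); opposite signs on the two sides produce rotation, matching the table above.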
## Aspects of Improvement
***Blue text on the board***
1. Replace the single camera with more sensors:
   * 2 cameras
   * IMU
   * IR (to determine whether a ball has been caught or not)
   * Lidar
   * Barometer
2. Color detection --> upgrade to use ML
3. If detected, return a variable `Area` (the area of the blob in the snapshot) in addition to `x` and `y`.
4. PID constant tuning
5. Dynamic mapping:
   * Base speed (dynamically mapped from `Area`)
   * Motor (clockwise / counterclockwise) automatically mapped to the CORRECT definition of `BACKWARD` and `FORWARD`
6. Seeking: add vertical actions
## Long Term System Setups
**High-Level Logic** (long term):
```pseudocode
If [Camera1] detects:
    1. Detects AprilTag / hoop:
        [Control_Board] distinguishes directions
    2. Detects green balloon:
        If [IR Sensor] detects a catch:
            [Camera2] seeks / detects AprilTag or hoop
            Goal: win points
        Else ([IR Sensor] detects no catch):
            [Control_Board] runs PID control
            until it catches the green balloon
Else ([Camera1] detects nothing):
    [Control_Board] seeks:
        Horizontally: rotate
        Vertically: move
```