Hey guys! I was so thrilled when this six-legged fella arrived that I even forgot to take a photo. Now I'm a little bit bummed out about losing the opportunity to record this precious moment.
With the unboxing excitement, I quickly downloaded the app and played with HEXA for a while. Its movements were so comical that I kept watching it for god knows how long. But watching alone couldn't satisfy me, so I started wondering what else it could do.
After about 15 minutes of thinking, I still didn't have a clue. That was when I changed my mindset: maybe I shouldn't pay so much attention to what HEXA could do, but focus on what I could and wanted to do.
Since I had done some research on computer vision, I soon decided to put my hands-on knowledge to work on HEXA. Technology aside, I also have a passion for bullfighting. Yes, I am a programmer with the heart of a matador! Unfortunately, I don't have the body for it. But I wasn't going to let that stop me anymore, because HEXA had just become my bull! I could get all the fun of bullfighting without exposing myself to any danger.
Then I went on to check the documentation and the SDK on the website, and I must say Vincross has done a very good job there. With all the information I needed, I was quickly able to start hacking on the little creature.
To transform HEXA into a bull, I must:
- Teach its vision system to recognize and locate the color RED;
- Make HEXA chase after the RED it finds.
Pseudo-code:
- Rotate the head step by step and take a picture at each step.
- Determine if there is a RED area in these images.
- If yes, stop rotating and run towards the RED area.
- If no, keep rotating the head.
- Keep checking for the RED area while running. If it is still there, keep running and adjust the direction as needed.
- If the RED area disappears while running, stop and go back to rotating the head to look for RED.
Three goroutines control the head rotation, the walking, and the RED detection separately. The check for whether there is a RED area in the visual field, as well as the head redirection logic, lives in the head rotation part. I then mapped these goroutines onto a few status flags. As a hacking demo it's not beautiful, so I highly recommend you guys also refer to the examples provided on the website. They're very helpful.
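Before the three blocks, here is roughly how everything is wired together. This is a trimmed-down sketch in the shape of a MIND SDK skill: the flag names match the snippets below, but the import paths and the OnStart()/OnClose() details are written from memory, so double-check them against the SDK examples.

package examples

import (
    "mind/core/framework/drivers/hexabody"
    "mind/core/framework/drivers/media"
    "mind/core/framework/skill"
)

// ScanRed holds the flags the goroutines communicate through.
// No locking here; good enough for a demo.
type ScanRed struct {
    skill.Base
    status bool // skill is running
    round  bool // the head should keep rotating and scanning
    run    bool // RED has been spotted, walk towards it
}

func NewSkill() skill.Interface {
    return &ScanRed{}
}

func (skill *ScanRed) OnStart() {
    hexabody.Start()
    media.Start()
    skill.status = true
    skill.round = true
    go skill.searchRed() // head rotation + RED check
    go skill.goToRed()   // walking
}

func (skill *ScanRed) OnClose() {
    skill.status = false
    hexabody.Stop()
    media.Stop()
}

In the snippets below the RED check ends up being called from inside searchRed() (via checkRedLightDistrict()), which is why only two goroutines are launched in this sketch.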
Here are the three blocks I mentioned. Any suggestions are welcome.
1. Head rotation
The interface for controlling HEXA's body is provided by the hexabody package. Note that both Start() and Stop() are essential for initializing and resetting HEXA's posture. For the head rotation, two parameters need to be set in MoveHead(): degree for the angle and duration for the time. One thing I want to point out is how degree relates to the head's current position rather than a fixed point: to rotate by a step, I read the current direction with Direction() and add the step to it before calling MoveHead(). Also, the default rotation direction is clockwise. As for duration, it effectively determines the rotation speed, but unfortunately I couldn't find its maximum value in the documentation.
func (skill *ScanRed) searchRed() {
    for skill.status {
        if skill.round {
            // Rotate the head 30 degrees from its current direction.
            direction := hexabody.Direction()
            direction += 30
            hexabody.MoveHead(direction, 200)
            // Check whether there is RED in the current view.
            skill.checkRedLightDistrict()
            time.Sleep(time.Millisecond * 100)
        }
        time.Sleep(time.Millisecond * 200)
    }
}
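I'm not sure what MoveHead() does once direction grows past 360 after a full sweep; the SDK may well normalize it internally. If you want to be safe, a small helper like this (my own convenience function, not part of the SDK, and it needs the math import) keeps the target angle inside [0, 360):

// moveHeadBy rotates the head by a relative offset, keeping the target angle
// inside [0, 360). My own convenience helper, not an SDK call (needs "math").
func moveHeadBy(offset float64, duration int) {
    direction := hexabody.Direction() + offset
    direction = math.Mod(direction, 360)
    if direction < 0 {
        direction += 360
    }
    hexabody.MoveHead(direction, duration)
}

With that, the loop body simply becomes moveHeadBy(30, 200).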
2. Detect RED
This module uses the media driver, and again both Start() and Stop() need to be invoked. I call SnapshotRGBA() to grab one RGBA frame from HEXA's camera. To reduce the processing burden, only a central clipping of the image is analyzed. For each pixel in that clipping I compare its R value against G and B, count the pixel as RED when the difference exceeds a threshold, and declare the frame RED if enough pixels pass. The algorithm is quite crude, but it works well enough as a compromise. Go's standard image package is used here. Here's the script for this part.
func isRed() bool {
    threshold := 200 // per-pixel "redness" threshold
    subRed := 0      // number of pixels counted as RED
    // Grab one RGBA frame from HEXA's camera.
    srcImg := media.SnapshotRGBA()
    srcBounds := srcImg.Bounds()
    // Copy the frame shifted by 10% in both directions, then take the
    // top-left half of the copy; the analyzed region therefore covers the
    // source pixels from (ptX, ptY) to (ptX+W/2, ptY+H/2), i.e. a rough
    // central clipping of the original frame.
    m := image.NewRGBA(srcBounds)
    ptX := (srcBounds.Size().X * 1) / 10
    ptY := (srcBounds.Size().Y * 1) / 10
    draw.Draw(m, srcImg.Bounds(), srcImg, image.Pt(ptX, ptY), draw.Src)
    subBounds := image.Rect(srcBounds.Min.X/2, srcBounds.Min.Y/2, srcBounds.Max.X/2, srcBounds.Max.Y/2)
    newImg := m.SubImage(subBounds)
    width := newImg.Bounds().Size().X
    height := newImg.Bounds().Size().Y
    for w := 0; w < width; w++ {
        for h := 0; h < height; h++ {
            // RGBA() returns 16-bit channels; shift down to 8-bit values.
            r, g, b, _ := newImg.At(w, h).RGBA()
            r = r >> 8
            g = g >> 8
            b = b >> 8
            // A pixel counts as RED when R clearly dominates both G and B.
            c := (int(r) - int(g)) + (int(r) - int(b))
            if c > threshold {
                subRed++
            }
        }
    }
    log.Info.Printf("%d %d", subRed, width*height)
    // The frame counts as RED when more than 0.5% of the clipped pixels pass.
    return subRed > (width*height)/200
}
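One piece I didn't paste is skill.checkRedLightDistrict(), which searchRed() calls after each head step. It's just the glue between isRed() and the status flags; roughly reconstructed, it looks like this:

// checkRedLightDistrict flips the status flags depending on whether the
// current frame contains RED. A rough reconstruction of my glue code.
func (skill *ScanRed) checkRedLightDistrict() {
    if isRed() {
        // RED found: stop rotating the head and start walking towards it.
        skill.round = false
        skill.run = true
    } else {
        // No RED: make sure we're scanning, not walking.
        skill.run = false
        skill.round = true
    }
}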
3. Run
Once HEXA detects a RED area, it stops rotating the head and walks towards that area. In this part I use Walk() from hexabody, which moves HEXA forward one frame in the given direction; the direction is the only parameter I need to change. The documentation doesn't say how far one frame actually is, though. Here's the script for this part.
func (skill *ScanRed) goToRed() {
    for skill.status {
        if skill.run {
            log.Info.Printf("RUN...")
            // Walk one frame towards the direction the head is facing.
            hexabody.Walk(hexabody.Direction(), 100)
        } else {
            time.Sleep(time.Millisecond * 200)
        }
    }
}
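If you want smoother motion than single frames, the same loop could be rewritten with WalkContinuously(), which the hexabody package also provides. I haven't tuned this variant, and both the speed value and the exact name of the stop call are written from memory, so check them against the documentation before trusting it:

// Variant of goToRed using continuous walking. The speed value (0.5) is an
// untested guess, and StopWalkingContinuously is the stop call as I remember
// it; verify both against the hexabody package.
func (skill *ScanRed) goToRedContinuously() {
    walking := false
    for skill.status {
        if skill.run && !walking {
            // RED in sight: start walking towards where the head points.
            hexabody.WalkContinuously(hexabody.Direction(), 0.5)
            walking = true
        } else if !skill.run && walking {
            // RED lost: stop and let searchRed() take over again.
            hexabody.StopWalkingContinuously()
            walking = false
        }
        time.Sleep(time.Millisecond * 200)
    }
}

Either way, the walking direction is simply taken from wherever the head currently points.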
Developing this skill took me about one day, much of which was spent getting to know the documentation and building the image processing algorithm. And I'd like to say again that the SDK is remarkably complete and easy to read.
Now I've made my HEXA into a bull. I mean kind of.
To me, it's more cute than fierce. I will polish the code and update the post. Perhaps I'll also build some new stuff for HEXA on weekends.
To the Vincross Team
And if any Vincross staff are still reading this, these are for you:
1. Why can't I adjust the camera's height? There would be much more room for imagination with a more flexible vision system.
2. Voice interaction would be cool. Will you support both the built-in mic and the audio module?
3. The distance sensor module is essential for kinematics, so please consider supporting it.
4. No realtime battery usage? That’s ridiculous!
Cheers,