Post 7 - Now, where have I been? - Part 1
Hello folks.
It’s been a while
since my last post – almost 6 months! But I’m here again with
another update, at last.
It’s been a busy time (I think all my posts start like that, lol), but it really has. I took on a contract job in February (got to pay the bills!) which has me working at a client’s 9-5, Monday to Friday, drastically cutting down my time available for developing the robots. But I’ve managed to make some progress nonetheless.
In this post I’m going to describe the next step in my robot’s evolution – constructing a map of its environment. What that means is that the robot moves around as before, using an ultrasonic sensor to avoid obstacles, but this time, as it moves, it plots its position, and the position of any obstacles, on a map which can be displayed on a PC screen. It knows its position by using odometry and the magic of mathematics (my maths tutor was right, it has come in useful after all, lol). For anyone who isn’t sure what odometry is, it’s a fancy name for working out your speed and distance travelled from the rotation of your wheels, like the speed dial and mileometer in a car.
To do this I’m using a mobile robot with similar functionality to the one in my earlier post. However, the one I’m using this time is based on the Lego NXT hardware. There are a few reasons for this, as follows:
1) The motors for the Lego NXT have a built-in rotation sensor which gives out pulses as the motor rotates. The NXT ‘brick’ counts these, and your program can read the counters to find out how much the wheel has turned. Knowing this and the wheel diameter lets you calculate the distance the robot has moved.
2) The NXT ‘brick’ has Bluetooth communications built in. This is good because I need to send the position data back to a PC to build up the map and display it so I can see it.
3) There is a Lego
programming environment available as a free download, which allows
you to code the ‘brick’ in C++ (or at least something very
close!). This means that when I’m happy with my code, I can
transfer it into an Arduino (have I said how much I love these
devices?) with minimal modification. It will also run on anything I
can get a C++ compiler for, e.g. a Raspberry Pi, so my code remains
portable, which is important to me.
Here are some photos of my NXT robot.
Photo 1 – Top view of NXT robot
In
photo 1, above, you can see the general layout of the robot, looking
down on it from above. The ‘brick’ is the block with the LCD
screen on it. The two ‘wheels’ you can see at the top of the
‘brick’ on either side of it are there to actuate bump switches,
but I’m not really using them in this robot. The actual drive
wheels can be seen at the bottom end of the brick, again one on
either side. Right at the bottom of the photo is the ultrasonic
sensor, sticking out at the front of the robot.
Photo 2 – Front view of NXT robot
In
photo 2 you can see the wheels and the ultrasonic sensor better.
Below the ultrasonic sensor is a light sensor, but again, that’s
not being used in this robot at the moment.
Photo 3 – Side view of NXT robot
Photo 3 just gives a side perspective. You can just make out a roller bearing at the back. Originally the robot had tracks, but they were giving trouble when turning, so I swapped them for wheels. However, that didn’t cure all of my problems, so after experimenting with various castor configurations, I settled on the roller bearing as giving the best results. It’s one of those issues that you don’t think about until you build a physical robot – I wouldn’t have picked up those difficulties using a simulator!
With the physical robot built, the code from the original mobile robot was loaded into the Lego compiler and modified to use the Lego commands for the motors and ultrasonic sensor. It was then tested to ensure that the robot performed in the same way as the original, which it did. Hooray!! That was the first hurdle successfully cleared. Now came the tricky bits: first, calculating where the robot thinks it is, and second, using Bluetooth to transmit that back to a PC.
I tackled the Bluetooth communications first, as without that working I would have to abandon using the Lego robot for this development. I spent some time going through the documentation on the Lego NXT Bluetooth; there are two manuals available, and both need to be read to understand how to drive the comms. Once I thought I had a grasp of that, I moved over to putting some code together on the PC, in Python, to receive information from the Lego robot. After learning how to set up Bluetooth comms in Python, I was able to establish a link and send some data across from the Lego robot to the PC. Fantastic – now what I needed was some actual data to send, in place of the dummy packets I had been using for testing, so on to the odometry!
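To give a flavour of the PC side, here’s a minimal sketch of the kind of Python receiver I mean, using the PyBluez library. The brick address and the packet contents shown are placeholders for illustration, not my actual code:

```python
# Minimal sketch of a PC-side Bluetooth receiver using PyBluez.
import struct
import bluetooth

NXT_ADDR = "00:16:53:00:00:00"  # placeholder - put your brick's address here

sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
sock.connect((NXT_ADDR, 1))     # the NXT listens on RFCOMM channel 1

try:
    while True:
        # Each NXT Bluetooth telegram is preceded by a two-byte,
        # little-endian length word (see the LEGO Bluetooth manuals).
        header = sock.recv(2)
        (length,) = struct.unpack("<H", header)
        payload = sock.recv(length)
        print("received", length, "bytes:", payload)
finally:
    sock.close()
```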
I decided to start off by developing the odometry using just one wheel, to prove the principle, as that simplified the maths and coding substantially. Fundamentally, while the robot is moving, the count of pulses from the wheels increases. So by taking the count of pulses, and knowing how many pulses we get for one revolution of the wheel and the wheel diameter, we can calculate how far the robot has moved. By reading the counts at frequent intervals and performing the calculations, we get the amount of movement since the last time we performed the calculation, and by adding these small incremental movements together, we get a running total of the distance travelled. If we move in reverse, e.g. when we get close to an obstacle, then we subtract the incremental movements while reversing from our total distance travelled. That just leaves the tricky bit: turning. For this, I decided to simplify things as much as possible by performing a ‘spin’, where I drive one wheel forward and the other wheel in reverse, making the robot turn on its own centre axis. As this doesn’t move the robot forwards or backwards, it doesn’t affect the distance travelled, just the direction that the robot is facing, and that can be worked out with some trigonometry (thanks again to my maths tutor for persevering! lol).
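In code, the whole thing boils down to a few lines. Here’s a sketch in Python for readability (the real thing is in C++ on the brick); the wheel and track measurements are placeholders, and the NXT motors count one pulse per degree of rotation, i.e. 360 per revolution:

```python
import math

PULSES_PER_REV = 360.0      # NXT motors report one count per degree
WHEEL_DIAMETER_MM = 56.0    # placeholder - measure your own wheel
TRACK_WIDTH_MM = 112.0      # placeholder - distance between the wheels

MM_PER_PULSE = math.pi * WHEEL_DIAMETER_MM / PULSES_PER_REV

def incremental_distance(prev_count, new_count):
    """Movement since the last reading; negative while reversing,
    so adding it to a running total handles reverse automatically."""
    return (new_count - prev_count) * MM_PER_PULSE

def spin_heading_change(prev_count, new_count):
    """Heading change (degrees) during a 'spin', where one wheel drives
    forward and the other in reverse. Each wheel traces an arc of radius
    half the track width, so the angle is arc length / radius. Counts
    here are from the forward-driving wheel; positive means clockwise."""
    arc_mm = (new_count - prev_count) * MM_PER_PULSE
    return math.degrees(arc_mm / (TRACK_WIDTH_MM / 2.0))
```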
So now, from a starting point, I can work out how far the robot has travelled and the direction it is moving in as it moves around. This is, of course, relative to the robot’s starting point, as it has no other frame of reference yet, such as a compass or GPS module (these will come later – much later, probably, lol). I decided to calculate the robot’s position on the robot itself and transmit the data as X and Y coordinates, plus the direction the robot is facing as an angle in degrees.
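Putting the distance and direction together gives the X and Y coordinates. As a sketch, using the convention that 0 degrees points ‘up’ the map and angles increase clockwise (a choice made for illustration, not necessarily what’s in my code):

```python
import math

def update_position(x_mm, y_mm, heading_deg, delta_mm):
    """Advance the position by delta_mm along the current heading.
    Heading 0 points up the map (+Y); angles increase clockwise."""
    x_mm += delta_mm * math.sin(math.radians(heading_deg))
    y_mm += delta_mm * math.cos(math.radians(heading_deg))
    return x_mm, y_mm
```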
Great – I had position information calculated in the robot, and I could transmit it over Bluetooth to a PC, where I could read the data in Python. Now what I needed was a map which could be populated with that information as the robot moved around, and which could be displayed on a PC screen so that I could see where the robot had been.
As I’m developing in Python, I used NumPy to create a two-dimensional array to hold the map. I had worked out that, with the wheels I was using and the number of counts per revolution, I have a resolution of better than 1mm per pulse, so I chose to scale the map so that each element would represent a 10mm square. To display the map on a screen, each square could then be one pixel, so an 800 by 600 pixel display would correspond to 800 x 600 cm, or 8m x 6m, which I figured would be a reasonable size for a room.
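Creating the map itself is just a couple of lines with NumPy. Something like this sketch (the cell codes are just a convention for illustration):

```python
import numpy as np

CELL_MM = 10              # each map element covers a 10mm square
MAP_W, MAP_H = 800, 600   # 800 x 600 cells = 8m x 6m at 10mm per cell

# 0 = unexplored, 1 = visited by the robot, 2 = obstacle detected
world_map = np.zeros((MAP_W, MAP_H), dtype=np.uint8)
```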
It was then fairly easy to take the robot’s position as X and Y coordinates and scale it to plot a point in the array at the correct position. I then used Pygame to copy the array to the PC screen, as this was very easy to do and also gives options for changing the colours of pixels, etc.
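As a sketch of what that looks like (placeholder names as before, with the map origin at the centre of the screen so the robot can start in the middle of the room):

```python
import numpy as np
import pygame

CELL_MM, MAP_W, MAP_H = 10, 800, 600
world_map = np.zeros((MAP_W, MAP_H), dtype=np.uint8)  # map from the sketch above

def plot_point(x_mm, y_mm, value):
    """Scale a millimetre position to a map cell, origin at the centre."""
    col = MAP_W // 2 + int(x_mm / CELL_MM)
    row = MAP_H // 2 - int(y_mm / CELL_MM)   # screen Y grows downwards
    if 0 <= col < MAP_W and 0 <= row < MAP_H:
        world_map[col, row] = value

# Copy the array to the screen, expanding the cell codes to RGB colours.
pygame.init()
screen = pygame.display.set_mode((MAP_W, MAP_H))
colours = np.array([[64, 64, 64],       # unexplored - grey
                    [255, 255, 255],    # robot trace - white
                    [0, 0, 0]],         # obstacle - black
                   dtype=np.uint8)
pygame.surfarray.blit_array(screen, colours[world_map])
pygame.display.flip()
```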
It was
then time to try it out! I constructed a small ‘play pen’ area in
my office, using boxes to form a bounding area for the robot to move
around in. This can be seen in the photo below.
Photo 4 – View of NXT robot in the Play Pen area
It was
then time to run the code. I had already run sections of it to test
it, so I was confident that I would get some results, but I wasn’t
sure how it would all work in reality. The video below is just a
short clip of the robot performing some movement and turns. As you
can see, it just looks the same as the original mobile robot video
from Post 2, which is to be expected as the underlying code is the
same. However, it’s good to confirm that the code can be
transferred between the Arduino and the Lego NXT and, by inference,
back again. Result!
Video 1 – Short clip of robot performing some movement in the Play Pen
The
above video shows the robot moving forward, detecting an obstacle
(the cupboard) and taking action to avoid it. As it turns, it keeps
picking up obstacles, in the form of the cupboard or the boxes
bounding the ‘play pen’ on the right of the video, so it repeats
the turn several times, until it detects clear space ahead. It then
travels in a straight line once more.
Next I ran the robot again, with the Python mapping code running on my PC. Lo and behold! As I watched, a map began to form before my eyes! Success – until the software crashed after a short while, lol. But at least it proved the principle. After several modifications to the code, both the C++ in the robot and the Python mapper, I got the results below.
Photo 5 – Screenshot of Map after the first few turns
In Photo 5 you can see the trace of where the robot has been, which is the white line. The robot started off as in the video, heading straight up, and then you can see how it turned as it avoided the obstacles. The black pixels are the positions of the obstacles it detected.
Photo 6 – Screenshot of Map after running for a while
Photo 6
shows the map after the robot has been running for a while. The first
part is the same as in Photo 5, but in this map the robot travelled
‘down’ to the bottom edge of the play pen. It then turned to its
right (our left on the map) and continued on until it came in range
of the wall, then it turned again into clear space. Again, the black
pixels are the detected obstacles. Photo 7 provides a closer view of
the trace.
Photo 7 – Closer view of the trace from the map in Photo 6
From Photos 6 and 7 you can see a few problems. The first is that on the big turns there is a section of the trace missing. Secondly, the plotted obstacle positions are not correct – the map shows obstacles in the clear space the robot has moved through! Also, while the robot is moving forward it has a slight drift to the right (its right), which isn’t picked up by the software. Having said that, I am pleased with the results, as they demonstrate that the principles are correct; it’s just my maths that’s wrong – now where’s my maths tutor, lol.
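For reference, the obstacle plot should just be a projection of the measured ultrasonic range out from the robot’s position along its heading, something like the sketch below (same clockwise heading convention as earlier); somewhere in my version of this, the maths has gone astray:

```python
import math

def obstacle_position(x_mm, y_mm, heading_deg, range_mm):
    """Project the measured ultrasonic range from the robot's position
    along its current heading to get the obstacle's map position."""
    ox = x_mm + range_mm * math.sin(math.radians(heading_deg))
    oy = y_mm + range_mm * math.cos(math.radians(heading_deg))
    return ox, oy
```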
So, having successfully demonstrated the principle, it’s time to consider the next stage. Firstly, I think I need to use the pulse counts from both wheels to calculate the robot’s position. That should give a more accurate result and will take account of the drift. Secondly, I need to look at those gaps in the trace at the turns to find out what caused them. Finally, I need to look at the obstacle position calculation and correct it. That should keep me busy for a while!
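For the two-wheel odometry, I expect to end up with something like the standard differential-drive update sketched below (again in Python for readability, with placeholder measurements; the real version will live in the robot’s C++ code). Because it uses both wheels, unequal wheel movement – the drift – shows up as a change in heading instead of being ignored:

```python
import math

MM_PER_PULSE = math.pi * 56.0 / 360.0   # placeholder wheel, 360 counts/rev
TRACK_WIDTH_MM = 112.0                   # placeholder wheelbase

def update_pose(x_mm, y_mm, heading_deg, left_delta, right_delta):
    """Standard differential-drive update from both wheels' pulse deltas.
    Heading 0 points up the map; angles increase clockwise."""
    left_mm = left_delta * MM_PER_PULSE
    right_mm = right_delta * MM_PER_PULSE
    delta_mm = (left_mm + right_mm) / 2.0        # forward movement
    heading_deg += math.degrees((left_mm - right_mm) / TRACK_WIDTH_MM)
    x_mm += delta_mm * math.sin(math.radians(heading_deg))
    y_mm += delta_mm * math.cos(math.radians(heading_deg))
    return x_mm, y_mm, heading_deg
```

So, until next time,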
That’s
all folks ...