Just a quick update: progress is a bit slow. I'm gathering hardware for the lawnmower prototype while preparing for the ACT.
So, as we all know, the satellite imagery from Google Maps is not exactly high resolution once you zoom to the scale of buildings, so my machine learning program doesn't have enough pixels to work with. Therefore, I simply wrote a filter program that labels anything green as "lawn". I must admit the program cannot distinguish between trees and lawns, but hey, at least we can obtain a basic map from Google Maps. A little step forward. I am also working on a plan to build the working prototype of part A – the robotics team of Gunn High School does not have spare robots to lend me, so I'll have to build my own one way or another. I don't have any teammates to help with this project right now, so I'm in a bit of a hard situation. I will storm Fry's and instructables.com for solutions.
As you can see, trees are recognized as lawns… but at least we have something to work with.
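For the curious, the green filter amounts to a per-pixel test. This is a minimal NumPy sketch of the idea, not my exact code; the margins and the brightness threshold are assumptions:

```python
import numpy as np

def lawn_mask(rgb):
    """Mark a pixel as 'lawn' when its green channel dominates.

    rgb: H x W x 3 uint8 array. Returns an H x W boolean mask.
    Note: this happily labels trees and bushes as lawn too.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    # green must beat red and blue by a margin, and be reasonably bright
    return (g > r + 10) & (g > b + 10) & (g > 60)
```

As the picture shows, anything sufficiently green – trees included – passes this kind of test.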
As I said before, I plan to use the Google Maps API to eliminate the need for image stitching and obtain top-down images of lawns. Today I read the documentation for the API and tested it in Python – a language I just learned.
This is the map of Gunn High School, accessed through the API. The resolution is not very high (it could be higher, since this is not the maximum zoom, but even then the picture is still not very clear), but this is a promising way to obtain photos.
I'm amazed by how simple the code for this is – Python lives up to its reputation for conciseness.
The photo of a location can be obtained simply by entering the address into the code.
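For the record, a request to the Static Maps API boils down to building one URL. This is a minimal sketch based on the documented parameters; the zoom level, image size, and of course the API key placeholder are assumptions:

```python
from urllib.parse import urlencode
# from urllib.request import urlretrieve  # to actually download the image

def staticmap_url(address, zoom=19, size="640x640", key="YOUR_API_KEY"):
    """Build a Google Static Maps API request URL for a satellite photo.

    The API geocodes the address itself, so an ordinary street
    address works directly as the 'center' parameter.
    """
    params = {
        "center": address,
        "zoom": zoom,            # roughly building scale
        "size": size,            # max 640x640 on the free tier
        "maptype": "satellite",
        "key": key,
    }
    return "https://maps.googleapis.com/maps/api/staticmap?" + urlencode(params)

# urlretrieve(staticmap_url("Gunn High School, Palo Alto, CA"), "lawn.png")
```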
The heralding photons are the first wave of photons to enter the instruments; there are about 210 of them in total. As the paper explains, the number of heralding photons must be small enough that the phase broadening of the atomic state is not severe – which means the heralded state is rather pure – and large enough that the probability of successfully generating the heralded state is acceptably high. With 210 photons in the heralding wave, the probability of detecting the successful generation of a heralded state is 1.5%.
If you're wondering what "phase broadening of the atomic state" is, what a "rather pure heralded state" is, and specifically why a larger number of heralding photons results in a larger detection probability (although a larger number and a larger probability do fit together intuitively, right?), I very embarrassingly don't know – I am a member of the quantum physics club of Gunn High School and we're still trying to figure out the paper. I try to make sure everything posted here is correct, and I'll come back to edit this post when I find out more.
The original plan for generating the lawnmower map (posted just yesterday) was to use drones to take photos of the lawns and stitch multiple photos together, but now there is no need: Google Maps has top-down photos of the entire US, pre-stitched. So as long as using Google Maps for the lawnmower is allowed (I'll check the copyright terms), there should be no need for image stitching.
As listed in the development plan, part A.2 includes a "map making" step, which generates a map of the lawn (a matrix with each element representing a lawn tile). The map should of course be to scale. Today I did some research on how this can be accomplished; here are two methods.
1. Let the lawnmower drive along the border of the lawn a few times, collecting its speed and heading every 0.1 seconds or so. We then use this data to build a rough, polygon-shaped map of the lawn, with each edge of the polygon being the distance the lawnmower traveled in 0.1 seconds. This map is of course rather rough, but the more laps the lawnmower runs around the lawn, the better the accuracy.
2. Let a drone fly to the lawn's location, take a photo of the lawn, and run the lawn recognition program from A.1 on the photo to obtain the map. If the lawn in question is very big, we may have to take multiple photos covering different parts of the lawn, use image stitching to literally "stitch" the photos together, and then run lawn recognition to get the map. The altitude of the drone can be used to scale the map (the higher the altitude, the smaller objects on the ground appear).
The "image stitching" technology mentioned in method 2 is the same technology used to combine photos from phone cameras into panorama pictures. I had to do some searching to find this term, as searching for "image reconstruction" only turns up methods for building 3D images out of multiple 2D ones, which is very useful in situations such as CT scans.
The two methods above are of course best used in conjunction. More specifically, I plan to generate a precise map with method 2 and then confirm it with method 1: method 1 produces a rough map that is unlikely to be "wrong", though not very useful on its own, while method 2 may produce an entirely wrong map due to interference from other green plants.
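Method 1 is essentially dead reckoning: each (speed, heading) sample is integrated into one short edge of the polygon. A minimal sketch, assuming samples are (speed in m/s, heading in degrees) pairs and 0 degrees is north:

```python
import math

def border_polygon(samples, dt=0.1):
    """Turn (speed, heading) samples into polygon vertices.

    Heading convention: 0 degrees = north, 90 degrees = east.
    Each sample contributes one edge of length speed * dt.
    """
    x, y = 0.0, 0.0
    vertices = [(x, y)]
    for speed, heading in samples:
        rad = math.radians(heading)
        x += speed * dt * math.sin(rad)  # east component
        y += speed * dt * math.cos(rad)  # north component
        vertices.append((x, y))
    return vertices
```

The accuracy claim above shows up here too: drift accumulates per edge, so averaging several laps cancels some of the noise.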
Section A.3.B determines the optimal direction in which to mow a section of a lawn, and I use PCA (principal component analysis) to do so.
A straightforward example:
In the picture above you can see that if we mow in the vertical direction, we have to make many turns, which is very inefficient. However, after re-plotting the data in a coordinate system with the vectors [-0.7, 0.7] and [0.7, 0.7] as axes (this essentially rotates the original picture), we get the following picture, which is much better to mow vertically. In practice, we simply compute the arctangent of the primary eigenvector [-0.7, 0.7] to get 135 degrees (or, equivalently, 315 degrees). This means the lawnmower is most efficient mowing in the 135- or 315-degree direction (taking 0 degrees as north), since doing so is equivalent to mowing vertically in the second picture.
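The whole PCA step boils down to: center the lawn-cell coordinates, take the covariance matrix, and take the arctangent of the principal eigenvector. A minimal NumPy sketch; note the angle here is measured from the +x axis rather than from north:

```python
import numpy as np

def mow_direction(points):
    """Best mowing axis for a set of lawn-cell coordinates, via PCA.

    points: N x 2 array of (x, y) coordinates of lawn cells.
    Returns an angle in degrees in [0, 180), measured from the +x axis;
    the opposite direction (angle + 180) is equally good.
    """
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    principal = eigvecs[:, -1]               # eigenvector of the largest one
    angle = np.degrees(np.arctan2(principal[1], principal[0]))
    return float(angle) % 180.0
```

Taking the result modulo 180 handles the sign ambiguity of the eigenvector, since mowing "up" and "down" the same line are equivalent.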
The original plan for the lawnmower project was published about a month ago, and while writing the individual parts of the program I found that some other methods were more convenient, so I deviated from the original plan. The original plan and the new plan are compared here; the changes in the new version are highlighted in blue.
A.1. Distinguishing lawn from boundary or flowers. Input: a photo from the camera feed, divided into 32×32-pixel blocks. Although a 32×32 block is rather large, and the initial matrix of 0's, 1's, and 2's may not be enough to plot a precise path, each block covers less ground – and therefore becomes more precise – as the lawnmower approaches the boundary. Output: a matrix of 0's, 1's, and 2's, one number per block: 0 means the block contains only lawn, 1 means it contains both lawn and non-lawn, and 2 means it is completely non-lawn.
A.2. Planning a path according to A.1. Note that the route should be rather straight, to make sure the lawn looks good when finished. Input: the matrix from A.1. Output: a path represented by a matrix whose cells are numbered 1 to n, where n is the number of positive elements in the matrix; places not on the path are labeled 0.
A.3. Following the path from A.2 correctly. This part of the program should correct any deviations from the desired course, keeping the lawnmower on track. Input: the matrix from A.2. Output: signals to the drivetrain.
B.1. Recognizing the objects around the lawnmower. Decisions about fast-moving cars should have high recall (high recall means false negatives rarely occur). Input: two consecutive photos from a camera feed – so that the speed of objects around the lawnmower can be estimated – plus the rough direction the lawnmower is heading. Output: all signs marked with preassigned numbers (to be worked on).
B.2. Making a route: mark out the route with a matrix similar to A.2, or command the lawnmower to stop because of traffic.
B.3. Following the route. Similar to A.3.
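The A.2 output described above – mowable cells numbered 1 to n, everything else 0 – could be produced with a simple serpentine (back-and-forth) numbering. This is a rough sketch of the idea, not the final planner; the input convention (1 = mowable, 0 = not) is an assumption:

```python
def serpentine_path(mask):
    """Number mowable cells in a back-and-forth (serpentine) order.

    mask: list of lists, 1 = mowable, 0 = not.
    Returns a matrix where mowable cells are numbered 1..n in the
    order they should be visited and all other cells are 0.
    """
    path = [[0] * len(row) for row in mask]
    n = 0
    for i, row in enumerate(mask):
        # alternate direction each row so turns are all 180 degrees
        cols = range(len(row)) if i % 2 == 0 else range(len(row) - 1, -1, -1)
        for j in cols:
            if mask[i][j]:
                n += 1
                path[i][j] = n
    return path
```

Keeping rows straight and alternating direction matches the "route should be rather straight" requirement in A.2.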
Part A-Mowing a single lawn
1. The lawnmower recognizes which areas are lawn, to avoid crashing into objects or mowing down flowers; this also lets it recognize the border of lawns. The algorithm is a basic 3-layer neural network with 3073 input features: each example is a 32×32-pixel block with RGB coloring, plus one additional feature – the bias term – and the network decides whether the block contains a piece of lawn or not.
2.The lawnmower generates the map of a lawn when mowing it the first time by going along the border of the lawn repeatedly.
3.The lawnmower generates a desired path of the lawn. This step includes two small steps:
A.Separating the map of the lawn into pieces of suitable size (as the lawnmower may be deployed on very large golf courses)
B.Determining the mowing direction for each section of the map (doing only 180 degree turns and as few of these turns as possible on each section ensures speed) using PCA.
4. The lawnmower mows each section of the lawn in the direction found in A.3.B, making only 180-degree turns.
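As a sanity check on the numbers in step 1: 32 × 32 pixels × 3 color channels + 1 bias term = 3073 input features. A minimal forward pass for a 3-layer network of that shape might look like the following (the hidden-layer size, the random weights, and the sigmoid activations are my assumptions, not the trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

# 32*32 RGB block flattened, plus a bias feature: 32*32*3 + 1 = 3073
n_in, n_hidden, n_out = 3073, 64, 1
W1 = rng.normal(0, 0.01, (n_hidden, n_in))
W2 = rng.normal(0, 0.01, (n_out, n_hidden))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contains_lawn(block_rgb):
    """block_rgb: 32 x 32 x 3 array with values in [0, 1].
    Returns the network's estimate of P(block contains lawn)."""
    x = np.append(block_rgb.reshape(-1), 1.0)  # flatten + bias -> 3073 features
    hidden = sigmoid(W1 @ x)
    return float(sigmoid(W2 @ hidden)[0])
```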
Part B-Going from one lawn to another
1.Recognize traffic signs using an open source library.
2.Compile traffic rules into a file understandable by the lawnmower
3.Integrate GPS, compass, and other things that enable guidance
Current progress: Part A is done except for the map generation (A.1) and the first small part of A.3, the map segmentation part. I will finish these parts, rewrite some parts of code in C++, and build a prototype ASAP. More changes to the plan are to be expected as I make the prototype and work on part B – the really defining part of this project!
Quantum: the [very basic] procedures of the “Entanglement with negative Wigner function of almost 3,000 atoms heralded by one photon” paper
Today I’m going to explain the very basic procedures of the experiment in the paper “Entanglement with negative Wigner function of almost 3,000 atoms heralded by one photon”, and explain some basic keywords included.
Steps overview [this is very rough and extremely simplified!] Each step will be explained in greater detail later. If you don't understand something, don't worry.
Photo from http://www.nature.com/nature/journal/v519/n7544/full/nature14293.html
Cavity mirrors: create resonance with the photons. Photons (vertically polarized) enter from the left, passing through the left mirror into the space between the cavity mirrors – mirrors are not perfect, and some photons can get through. 87Rb (rubidium) atoms – the ones the experiment entangles with each other – cooled to 50 microkelvin (extremely close to absolute zero), are placed between the mirrors.
Polarizing beamsplitter: the photons interact with the atoms, their polarization changes – by a random angle – and they enter the polarizing beamsplitter, which separates vertically polarized photons from horizontally polarized ones. Guess what happens to photons that are neither completely vertically nor completely horizontally polarized? They have a probability of being sent to either the vertical or the horizontal side, and the more vertical they are, the more likely they go to the vertical side. This is very important, and links directly to the previous post explaining the probability-based character of QM. Previous post: https://victorzhaoblog.wordpress.com/2017/03/13/qm-notes-post-1/
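The "more vertical, more likely" behavior is the Born rule: a photon polarized at an angle θ from vertical exits the vertical port with probability cos²θ. A quick sketch of the arithmetic:

```python
import math

def p_vertical(theta_deg):
    """Probability that a photon polarized theta_deg degrees away from
    vertical exits the vertical port of the polarizing beamsplitter
    (Born rule: cos^2 of the angle)."""
    return math.cos(math.radians(theta_deg)) ** 2

# p_vertical(0) -> 1.0 (fully vertical), p_vertical(45) -> ~0.5
```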
Single photon detectors: there are two single photon detectors, one detecting photons on the vertical beam and one detecting photons on the horizontal beam. The detection of a horizontally polarized photon means a LOT, and is linked directly to the title of the paper.
2.Send in the “heralding” photons
This is the first of two waves of photons to go into the apparatus. This wave essentially makes the atoms entangle: a total of about 210 photons enter the apparatus to "herald" – essentially, both to generate and to confirm the entanglement. More on the number 210 later.
3.Send in the “measuring” photons
This is the second wave of photons. A total of 17,000 photons are sent in to measure the results. More on the choice of this number later.
4.Analyse the results!
Again, these are only extremely simplified steps, and I will continue to explain them one by one in greater depth later.
Quantum update 2-basics on the paper “Entanglement with negative Wigner function of almost 3,000 atoms heralded by one photon”
The quantum mechanics content on this blog is meant to explain the paper "Entanglement with negative Wigner function of almost 3,000 atoms heralded by one photon". The previous QM post addressed some fundamental differences between classical mechanics and QM, and I will continue to upload the QM background needed to understand the paper. The paper itself cannot be uploaded for copyright reasons, but I will summarize it in my own words and explain the methods and findings of the experiment.