Thursday, December 20, 2012

Happy holidays!

As the holiday season approaches once more, it is time again for the publisher, editors and hardworking sales representatives at Vision Systems Design to take a well-deserved break from our labors until the New Year comes sneaking around.

Personally, I'm looking forward to these holidays and the ritual of waking up eagerly on Christmas morning to see what delightful pieces of high-technology equipment Papa Noël may have deposited beneath the resplendent $40 Christmas tree which stands proudly erect in my living room.

Perhaps this year, I might be elated to find that one of my generous friends, relatives or work colleagues has bought me a new camera or piece of video equipment which I can deploy to capture the thrilling magic of the holidays. Before doing so, however, you can be sure that I will look up the specifications of the imagers used in them to discover exactly how many line pairs per millimeter they can resolve!

But even if I don't receive any new high-technology items this year, I can be almost one hundred per cent certain that someone will have bought me a new pair of cotton socks and some underwear.

Any fathers amongst our readership will recognize the importance of such presents, since hardworking engineers and editors like me rarely have time to buy such items of loathing (Surely, clothing? -- Ed.) during the course of the year!

Whatever presents I receive, one thing is for sure. Come Christmas Day, I will be sitting down with friends and family to tuck into a nicely stuffed bird, accompanied by a plethora of roast potatoes, sprouts and cranberry sauce. Not to mention the canapés, aperitifs and petits fours that I plan to wash down with some mulled wine.

Having satiated myself, I may well take a little rest on the couch to watch the endless parade of intriguing variety shows that will undoubtedly be broadcast on television this holiday season.

It might sound like a quiet time for some, but for me, it's a great respite from the hurly-burly world of publishing that consumes so much of my existence during the rest of the year.

Enough said. On behalf of the entire team here at Vision Systems Design, I would like to thank those folks in the industry who have supported our efforts over the past year through advertising, as well as those who have taken the time and trouble to discuss the vision systems that they have developed with our editors for the benefit of our readers.

I sincerely hope that next year will bring you and your families, as well as the companies that you work for, the good fortune and prosperity that you deserve. Have a wonderful holiday season. I hope your bird tastes as good as mine.

Tuesday, December 18, 2012

Balloon vision

Anyone who has ever attended the birthday party of a small child will know that at some point in the proceedings an event will occur that inevitably upsets one or more of the children present.

And so it was when Dave Wilson, our beleaguered European Editor, visited the small, sleepy hamlet of Steeple Claydon in Buckinghamshire, England last weekend to attend the birthday party of the daughter of a close friend.

Towards the end of the party, the birthday girl's helium-filled balloon was let loose from its moorings by another child at the party, only to ascend to the 50-ft-high ceiling of the seventeenth-century village hall where the party was being held.

Seeing the look of disappointment on the child's face, our European editor strode off into the village hall kitchen to ask if any of the adults present might know of any innovative means by which the rogue balloon could be brought down from its lofty heights.

Sadly, most of them shook their heads in despair. One suggested waiting until the balloon deflated. Another suggested hiring a very long ladder. No one was able to offer a practical solution to the conundrum at all.

Upon returning to the main hall, our European Editor was somewhat taken aback to discover that the balloon in question was lying on the floor in the hands of the father of the child, who proceeded to attach a weight to it before passing it back to his daughter.

Needless to say, our European Editor was absolutely intrigued to discover how such a feat had been accomplished and approached the father to discuss the means that he had employed to retrieve the balloon from such a height.

Well, I must tell you, the solution was rather ingenious. The child's father had taken a small, yet rather heavy packet of the children's candy and wrapped it with a voluminous amount of double-sided sticky tape. Having done so, he pitched the projectile directly at the balloon to which it affixed itself. Naturally enough, after it had done so, the laden balloon then descended rapidly to the floor of the village hall!

The retrieval of a balloon from the ceiling of a village hall might at first appear to have little to do with the subject of vision systems design. But those with years of experience in the industry will see it differently.

You see, the solution to the thorny problem of how to retrieve the balloon was only derived through an indirect and creative thought process, using reasoning that was not immediately obvious. And it's that sort of lateral thinking that sets the successful companies in the vision systems design business apart from the ones that are not.

Tuesday, December 11, 2012

Visions of burgers

An average fast-food restaurant spends $135,000 a year employing the individuals who produce its hamburgers. But all that could soon become a thing of the past if the engineers at Momentum Machines (San Francisco, CA, USA) have anything to do with it.

For the engineers there are developing a robotic system that can do everything those employees presently do -- only better. Indeed, they are claiming that with their new robotic system in place, the labor savings will enable future restaurants to offer "gourmet quality" burgers at fast-food prices.

The robotic system will be able to offer custom meat grinds for every single customer. So if you want a patty with one third pork and two thirds bison ground after you place your order, that won't be a problem. Aside from mixing up and cooking the meat, the system can also slice toppings like tomatoes and pickles immediately before it places the slices onto a burger, providing customers with the freshest burger possible.

The result of all of this, at least according to the company, is that the consumer will be presented with a product that is more consistent and more sanitary. And since the system will be able to produce 360 hamburgers per hour, there will be more than enough to go around!


While it all might sound a bit far-fetched, the team at Momentum Machines has an impressive background. Its members were trained in mechanical engineering, control systems, and physics at institutions such as Berkeley, Stanford, UCSB and Utah University. And their work experience includes firms such as iRobot, NASA, Sandia National Labs, Semiconductor Technology Associates and Tesla.

They are being advised in their endeavors by Don Fox, the CEO of Firehouse Subs and 2011 Nation's Restaurant News Operator of the Year, and by The Culinary Edge, a highly esteemed restaurant consulting group. Investment capital is being provided by Lemnos Labs.

Now such a system will clearly have a great impact on the folks who presently work in fast-food chains. So the noble-minded folks at Momentum Machines aim to help out those who may need to "transition" to a new job as a result of their technology by offering discounted technical training to any former line cook of a restaurant that deploys the new system.

If you have a degree or two, on the other hand, you might even consider working for the company itself. Currently, it is looking to hire a mechatronics engineer as well as a machine vision specialist to further the development of the system. So if you know anything about vision systems design and love a good hamburger, you know where to go.

Thursday, December 6, 2012

Passing of a legend

Dr. Bryce E. Bayer, the former Eastman Kodak scientist who invented the standard color filter pattern that bears his name, has died.

Aged 83, Bayer died on November 13 in Bath, Maine. According to a report in the New York Times, the cause of Bayer's death was a long illness related to dementia.

In Bayer-based color imagers, pixels on an image sensor are covered with a mosaic of red, green, and blue filters. In the Bayer pattern, 50% of the pixels are green, 25% are red, and 25% are blue. A technique called Bayer demosaicing is then used to estimate the two missing color values at each pixel, so that a full-color image can be obtained from sensors that employ the Bayer filter.

The American scientist chose to use twice as many green pixels as red or blue to mimic the resolution and the sensitivity to green light of the human eye.
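
For readers who'd like to see what demosaicing actually involves, here is a minimal sketch of bilinear interpolation over a Bayer mosaic in Python with NumPy and SciPy. It is purely illustrative -- no camera maker's production pipeline is this simple -- and it assumes an RGGB layout with the red sample at the top-left of each 2x2 tile.

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(raw):
        """Bilinear demosaic of an RGGB Bayer mosaic (2-D float array)."""
        h, w = raw.shape
        r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0  # R on even rows/cols
        b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0  # B on odd rows/cols
        g_mask = 1.0 - r_mask - b_mask                       # G fills the other half

        # Each kernel averages a channel's known samples into its gaps.
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
        k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0

        r = convolve(raw * r_mask, k_rb, mode='mirror')
        g = convolve(raw * g_mask, k_g,  mode='mirror')
        b = convolve(raw * b_mask, k_rb, mode='mirror')
        return np.dstack([r, g, b])  # full RGB triplet at every pixel

Real demosaicing algorithms are edge-aware to avoid the color fringing that this naive averaging produces along sharp transitions, but the principle -- interpolating each channel's missing samples from its neighbors -- is the same.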

Today, there are other techniques that are used to produce color images. One such technique uses a prism to split light into three components that are then imaged by three separate sensors. Another uses a layered design where each point on the sensor array has photosensitive receptors for all three primary colors.

Despite these advances, the Bayer filter -- which he patented while working for Eastman Kodak in 1976 -- is the one most commonly found in consumer cameras, camcorders, and scanners to create color images.

The staff of Vision Systems Design magazine would like to express our sincere condolences to Dr. Bayer's family.

He will be remembered as one of the greatest pioneers of digital imaging technology.

Wednesday, December 5, 2012

Dummies get smart

Any fan of the British TV science fiction series Dr. Who will be only too familiar with one of his oldest arch-enemies: a race of creatures called the Autons -- life-sized living showroom dummies that stagger about the streets blowing up anything in their path using weapons concealed within their hands.

When they first appeared on the show back in the 1970s, the Autons created quite a stir amongst the Dr. Who fan base. Despite the fact that they did not look particularly realistic, there was something about their dumb, robot-like movements and expressionless faces that struck fear into the audience at the time.

Since the 1970s, robotics technology has, of course, come a long way. While it might have seemed inconceivable back then that showroom dummies might one day be equipped with any technology to make them more lifelike, or indeed, more intimidating, today it's almost expected of them!

The folks at Kee Square (Milan, Italy) and Almax (Mariano Comense, Italy), a leading manufacturer of mannequins, would surely be the first to agree.

For just recently the two companies took the wraps off a new mannequin that incorporates an intelligent vision system that can help shop owners across the world "observe" and analyze who is attracted to items in their stores, revealing important details about them such as their age range, gender and race.


The mannequin itself has a camera installed in its head that captures the facial features of people passing through a store -- data which is then analyzed by software to provide statistical and contextual information about them to the store owners. The embedded software can also provide data such as the number of people passing in front of a window and at what times of day they did so.
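
Kee Square hasn't published its algorithms, of course, but the footfall-counting half of the job can be approximated in a few lines of Python with OpenCV's stock frontal-face detector. Treat the following as a toy sketch of the idea rather than anything resembling the real product -- camera index 0 merely stands in for the camera hidden in the mannequin's head, and the demographic analysis (age, gender, race) would require separately trained classifiers on top.

    import time
    import cv2

    # Haar cascade bundled with the opencv-python package
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # stand-in for the camera in the mannequin's head
    log = []                   # (timestamp, faces in view) pairs for the store owner

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        log.append((time.time(), len(faces)))

Note that a per-frame tally like this over-counts a shopper who lingers; a real system would track faces between frames before adding them to its statistics.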

Fortunately for store customers, the new mannequins are not quite as sophisticated as the Autons in Dr. Who. Although they are somewhat more attractive, they are not, for example, able to move. Nor do they come equipped with any sort of weaponry such as ray guns with which to inflict harm upon innocent shoppers.

But perhaps the best feature about them is that they are made of shock-proof polystyrene and finished with water-based paints. That means that when the time comes for them to retire, they can be easily recycled into something more useful.

More information on the mannequins can be found here. More information on Dr. Who can be found here.

Friday, November 30, 2012

Vegetables at Tiffany's

Over the past decades, many individuals faced with a lack of work in their own countries have migrated to somewhat more affluent countries to eke out a living.

Many of these folks have moved to countries in Europe and to the United States of America -- either legally or illegally -- to find work in agricultural jobs, performing the sorts of tasks that the residents of those countries would either find too demeaning or too low-paid to take up.

These agricultural jobs usually involve living and working on farms for long hours, picking fruit and vegetables for a minimum wage. And although that minimum wage far exceeds what such folks might be able to earn in their own countries, one month's pay is usually barely enough to keep a roof over their heads, let alone buy them a nice entrecôte tranchée at L'Atelier de Joël Robuchon.

Now, as if things weren't tough enough for these poor migrant workers, they are made to feel even worse by political groups that insist that the menial jobs they perform pulling potatoes and picking oranges have taken jobs from the natives of those countries, whose own lives have naturally become poorer due to the work opportunities that are no longer available.

The farmers in the US and Europe, of course, see things a bit differently. Without such low-paid workers, their produce would not be competitive with farmers from further afield. Indeed, in many cases, even though their pickers and pluckers are paid minimum wage, the farmers still find it hard to compete with other farmers around the world who employ their workers for even less money.

But it looks as if, in the not-too-distant future, all of this is about to change, thanks to the deployment of robotic harvesting machinery that is under development in the US and the European Union.

Just this week, for example, Vision Systems Design reported on two new developments in the field (no pun intended). One was a $6m project in which engineers at Purdue University and Vision Robotics have teamed up to develop an automated vision-based grapevine pruner. The second was a fully automatic vision-based robotic system to harvest both white and violet asparagus, which is being funded under a European grant.

These projects, of course, represent just the tip of the iceberg. Numerous other projects of a similar nature are under development across the world that will revolutionize farming forever. Of course, it may take some time before such systems are perfected, but there's no doubt in my mind that, given enough time and effort, they will be.

The future impact on the migrant workers, however, is less clear. Will they then return to their native countries where automation is less prevalent to seek work, or travel further afield? Sadly, whether they run to the west to Tulip, Texas or to the east to Somaliland, their future employment is all used up.

Wednesday, November 28, 2012

Seeking support

When discussing the design of any new vision system with systems integrators, I'm always intrigued to discover what specific hardware and software they choose to implement their systems with.

More often than not, of course, the two key reasons any product is chosen are its technical merits and its price. But there is always a third, and perhaps more important, reason that systems integrators opt for the products that they do. And that's the support that they receive from the distributor, or reseller, that sells them the product.

As one might expect, the distributor or reseller that fully comprehends, and can explain, both the advantages and the disadvantages of his products is more likely to win an order than one that simply ships products without much of an idea of how they work or how to integrate them into a system.

But these days, there is more to winning a sale than that. The distributor or reseller that also has some understanding of how his products will fit into the bigger scheme of things has an even greater advantage over those that simply have a basic knowledge of one or two product lines. Indeed, it is this more holistic approach that will almost guarantee that a product is specified into a new machine.

In one recent conversation I had with a systems integrator, he stated quite clearly that his choice of camera, system software and lighting products had been heavily influenced by the reseller that he discussed his machine vision needs with.

That reseller was obviously not only knowledgeable about many aspects of image processing, but was also wily enough to be able to leverage the expertise he passed along to the systems integrator into quite a lucrative sale.


In these strained economic times, however, many companies are reducing the number of experienced folks that they have on board in favor of younger, less well-paid individuals. Naturally enough, these folks haven't had enough years in the industry to acquire anything more than a basic understanding of their own company's product lines.

Happily, the systems integrator that I spoke to had found and worked with a reseller that clearly understood the merits of hiring and keeping experienced, multifaceted individuals who could assist him with the task of developing his vision system.

Wednesday, November 21, 2012

No vision at all

Anyone with an Xbox hooked up to a Kinect camera will appreciate that gesture recognition has added all sorts of interactive possibilities to gaming that simply weren't possible before.

But a vision system isn't the only way of detecting the gestures of individuals to enable them to control computer systems, as one company proved this month when it launched an alternative gesture recognition technology that might challenge the role of vision in certain applications.

That company was none other than Microchip Technology (Chandler, AZ, USA), whose so-called GestIC system is based on the idea of equipping a device such as a tablet PC with a number of thin electrodes that create an electric field around the device when an electric current is passed through them.

When a user's hand moves into the area around the tablet, the electrical field distribution becomes distorted as the field lines intercepted by the hand are shunted to ground through the human body. The distortion of the field is then detected by a number of receiver electrodes integrated onto the top layer of the device.

To support this concept, Microchip Technology has -- as you might have expected -- produced an integrated circuit named the MGC3130 that not only acts as a signal generator but also contains signal conditioning and analog-to-digital converters that convert the analog signals from the receivers into a digital format.

Once they are in that format, a 32-bit signal processor analyzes the signals using an on-chip software suite that can track the x/y/z position of the hand as well as determine the gestures of a user. These are then relayed to an applications processor in the system, which performs commands such as opening applications, pointing, clicking, zooming and scrolling.
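
Microchip supplies its own gesture library with the chip, so the snippet below is purely a thought experiment: a crude host-side classifier, in Python, that turns a short history of the x/y/z positions such a controller reports into left/right swipe events. Every name and threshold in it is invented for illustration and has nothing to do with Microchip's actual API.

    from collections import deque

    def classify_swipe(samples, min_travel=0.05):
        """Label a horizontal swipe when motion is mostly along x.
        `samples` is a sequence of (x, y, z) positions normalized to 0..1."""
        (x0, y0, _), (x1, y1, _) = samples[0], samples[-1]
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) > 2 * abs(dy) and abs(dx) > min_travel:
            return "swipe_right" if dx > 0 else "swipe_left"
        return None

    # Synthetic trace of a hand drifting leftward through the e-field
    trace = deque(((0.9 - 0.03 * i, 0.5, 0.4) for i in range(20)), maxlen=30)
    print(classify_swipe(list(trace)))  # -> swipe_left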


While the folks at Microchip Technology believe that the GestIC system will enable the "next breakthrough in human-machine-interface design", and are touting the fact that it offers the lowest power consumption of any 3-D sensing technology, the technology is still limited to a detection range of up to 15 cm.

So while it does offer an interesting alternative to a camera-based system, I don't think that the folks at Microsoft will be too worried that it will ever compete with their Kinect camera.

Samples of Microchip's MGC3130 -- which comes in a 5x5 mm 28-pin QFN package -- are available today. Volume production is expected in April 2013 at $2.26 each in high volumes. An evaluation kit is available today for $169. More information is available here.

Friday, November 16, 2012

The Italian goal

Those of you who traveled to last week's VISION 2012 show in Stuttgart might have noticed that I wasn't the only editor from Vision Systems Design to attend the event.

That's right. On my trip to Germany I was accompanied by none other than our European editor Dave who was also there to discover what was new, original and inventive in the vision systems business.

During his time at the show, I asked Dave to stop and chat with Signor Donato Montanari, the General Manager of the Vision Business Unit of Datalogic (Bologna, Italy), a company which -- as you may recall -- took over Minneapolis, MN-based PPT Vision last year.

I wanted Dave to find out how a large multinational company like Datalogic was faring in these precarious economic times, as well as to discover what new technical developments, if any, had taken place since the takeover.

On the European front, Dave was hardly surprised to hear that Datalogic's vision business had remained pretty much flat this year, since most of Southern Europe is still in the economic doldrums. But Signor Montanari painted a much more optimistic picture of his company's fortunes in the US and Asia. Thanks to the fact that the entire US Datalogic sales force had been brought to bear to sell the new vision product line, business was up ten percent this year in the US and a whopping forty percent in Asia.

But what of the technology that Datalogic inherited, I hear you ask? Well, apparently, there have been some changes there too. While the old PPT had subcontracted out the manufacture of its cameras, the Datalogic management has now brought the manufacturing process in-house.

But that's not all. On the hardware front, the PPT cameras that were based on an embedded PC architecture have now been redesigned and rebuilt based on digital signal processors, resulting in a subsequent cost reduction. And, in a six month effort, the existing PPT drag and drop vision programming software environment has been ported over to them.


Now, as many of you may know, PPT had a rather interesting business model with respect to its cameras and software. If you bought cameras from the company, the software development environment was provided for free. For the time being, it appears as if Datalogic will be keeping to that model.

But next year Signor Montanari said that Datalogic had plans to make the integration of third party software into its software development environment a whole lot easier for engineers than it has been in the past. And he also said that the company was beefing up its technical support centers across the globe to offer the capability of customizing the PPT software for specific customer applications.

Whether the company becomes a dominant player in the machine vision business still remains to be seen. But from listening to Signor Montanari speak, Dave seems convinced that it's a goal that the Italians will be trying their best to achieve.

Wednesday, November 14, 2012

Vision 2012: A Space Odyssey

According to the most recent figures released by its organizers, the Stuttgart VISION 2012 show was still the place to be seen for those involved in the machine vision industry. Testifying to that fact, more than 7,000 visitors attended the 25th anniversary of the show last week, roughly the same number that showed up last year.

Unlike previous years, this year all the exhibitors found themselves under one roof in the L-Bank Forum of the Stuttgart exhibition center. And there were plenty of them for the attendees to visit too -- the 25th anniversary of the show saw no fewer than 372 exhibitors parading their wares -- an increase on the 351 exhibitors that attended the show last year.

And what a sight it was too. In contrast to previous years, many smaller- to medium-sized companies had opted for much larger booths at this year's show. In a clear attempt to impress the attendees and outdo their competition, they found themselves cheek by jowl with better-established outfits, dwarfing them with booths that appeared to be almost as high as the Bradbury Building.

There was an increase in the number of those exhibitors that came from outside Germany this year too. While last year saw just 46 per cent of those exhibiting come from further afield, this year, the figure was up to 49 per cent. Representing 32 countries in all, the exhibitors brought with them cameras, vision sensors, frame grabbers, software tools, illumination systems, lenses, accessories as well as complete machine vision systems.

Of the attendees to the show, the organizers say that 85 per cent were involved in purchasing and procurement decisions in their company. As you might expect, most of them were primarily interested in machine vision components and applications. But an increasing number of visitors expressed an interest in turnkey machine vision systems as well.


The VISION show was also a place where, aside from checking out the new products on display, one could gain some insight into how vibrant the vision systems industry is. At the VISION Press lunch held on Tuesday November 6, for example, Dr. Olaf Munkelt, the Managing Director of image processing software vendor MVTec Software and Chairman of the Executive Board of the VDMA Machine Vision Group, presented an overview of the state of the German machine vision market.

The figures he showed highlighted the fact that the total turnover for machine vision systems in Germany was expected to remain pretty much flat this year, with a growth of just two percent in 2013. But there was better news from outside Germany, where orders for machine vision systems were predicted to rise at a somewhat higher rate.

But not every company is experiencing low growth rates. One executive that I ran into on my way back to England from VISION 2012 claimed that his company had experienced a remarkable 20 per cent growth in orders this year, a trend he clearly expected to continue next year as he was actively looking to hire a number of engineers to meet the demand for his products.

Next year, VISION 2013 will be staged two months earlier, from September 24 to 26, 2013, so none of us will have quite as long to wait to get our next dose of machine vision technology.

But will that be long enough for those involved in our industry to really develop any game changing technology? One company owner I spoke to didn't think so. He said that his outfit would be doing no more than demonstrating the same products that he displayed this year. By then, he said, at least his engineering team might have had time to iron out all the bugs in them!

Friday, November 2, 2012

Visions of the future

Twenty five years ago, a machine vision system that performed a simple inspection task may have cost $100,000. Today, a similar system based around a smart camera can perform the same task for $3,000.

The decrease in the price of the sensors, processors and lighting components used to manufacture such systems has been driven by the widespread deployment of those components in high-volume consumer products. And that trend is likely to continue into the future.

As the cost of the hardware of such systems has decreased, the capabilities of integrated software development environments have increased. As such, rather than hand-code their systems from scratch, designers can now choose from a variety of software packages with large libraries of image processing functions which they can use to program their systems.

The combination of inexpensive hardware and easy to use programming tools has enabled OEM integrators to develop systems in a much shorter period of time than ever before, offering them the possibility of developing several systems each year for customers in a variety of industries.

The decreased price of hardware and the ease of use of many software packages have also allowed many sophisticated end users to take on the role once performed by the systems integrator, developing their own systems in-house rather than turning to outside expertise.

Over the next ten years, engineers can expect to see more of the same. As the system hardware decreases in price, however, they can also expect to see companies develop more highly specialized processors, sensors and lighting systems in an attempt to differentiate their product lines from those of their competition.

On the software front, developers will continue to refine their software packages, adding greater capabilities while driving down the cost of deployment by offering subsets of their full development environments in the form of software apps to their customers.


As 3-D hardware and software become more prevalent, designers will also be challenged to understand how capturing and processing images in 3-D might enable them to develop more complex systems to tackle their vision applications.

In the December issue of Vision Systems Design, I'll be bringing out my crystal ball to see if I can predict some more emerging trends in the field of machine vision. Be sure to pick up your copy when it lands on your doormat.

Wednesday, October 31, 2012

Galilean Christmas time

Regular readers of this blog might remember that a week or so ago I reported on a New Zealand engineer by the name of Mark Hampton who is attempting to fund the development of a right-angled lens for the Apple iPhone camera and Apple iPad through a site called Kickstarter.

As I mentioned before, the New York City-based Kickstarter web site is a funding platform which enables creative individuals to post ideas for potential new products on the site.

If readers of the web site like a particular product enough to buy it, they can pre-order it by pledging money to the company that has designed it. If the company then succeeds in reaching its funding goal to manufacture the product, all backers' credit cards are charged and the products are produced and delivered. If the project falls short of reaching its goals, no one is charged.

Well, I'm now pleased to report that $17,030 has already been pledged for Hampton's project and with only a $27,500 target to hit, it now looks as if his dream of making his product a reality will soon come true.

While tracking the fortunes of Hampton, I've also been checking out the other products that the Kickstarter web site has successfully funded. And I'm pleased to say that I have found one in particular that would make the perfect present for a whole bunch of my friends this Christmas.

The product itself is an iPhone platform called Galileo that can be controlled remotely from an Apple iPad or other iOS device. Capable of 360-degree pan-and-tilt at speeds of up to 200 degrees per second in any orientation, Galileo should prove useful not only to amateur photographers but also to folks with babies and toddlers who'd like to keep an eye on their activities!

Rather amazingly, to put the little beast into production, entrepreneurs Josh Guyot and JoeBen Bevirt were originally seeking pledges of up to $100,000 on the Kickstarter site, but it appears that the project garnered so much interest that over 5,000 people backed the idea with the result that the team raked in a whopping $702,427.

Unfortunately, much to my dismay, it's still impossible to actually purchase one of the little robotic beasts from Motrr (Santa Cruz, CA, USA) -- the company that the duo set up to sell the units. At the present time, the best that one can do is to sign up to be notified via email when the Galileo will be available for sale.

Hopefully, that will be in time for Christmas.

Friday, October 26, 2012

Attention seeking

Have you ever nodded off during a lecture or a seminar? I know I have. In my case, it's usually when I'm presented with long-winded discussions about the financial state of the economy, rather than when I'm treated to an engaging treatise on how a particular individual has developed an innovative vision inspection system.

As a student, I had the same problem. If the lecturer wasn't particularly engaging, I found that my mind tended to wander to some other, entirely more fanciful place, where I imagined that I might be occupied by some altogether more interesting activities.

Recognizing that other students suffer the same attention deficit, a professor of physics education at Kennesaw State University (Kennesaw, GA, USA) has been trying to uncover why attention wanders by equipping students with eye-tracking technology during lectures in the classroom.

His first-of-its-kind study aims to provide new insights into effective teaching techniques that can keep students engaged and motivated to learn during lectures.

By using glasses equipped with eye-tracking technology from Tobii Technology (Danderyd, Sweden), Professor David Rosengrant was able to measure what students observe during a lecture, how much of their time was dedicated to the material presented in the class, and discover what the factors were that distracted them the most.

Professor Rosengrant's pilot study was held over a four-month period with eight college students in 70-minute pre-elementary education lectures at Kennesaw State University.


The study discredited the widely accepted belief that classroom attention peaks during the first 15 minutes of class and then generally tapers off. Instead, Rosengrant discovered that classroom attention is actually impacted by various factors throughout the duration of a lecture.

Those factors include the verbal presentation of new material that is not contained within an instructor's PowerPoint presentation, the use of humor by the instructor, and the proximity of the instructor to the student, all of which contribute to greater attention from the student.

Professor Rosengrant's study also concluded that "digital distractions" such as mobile phones and the Web -- particularly Facebook -- are the greatest inhibitors to retaining students' attention in the classroom.

When I read that, I started to get a bit hot under the collar. While I appreciate that not all academics are born teachers, the least the students who attend their classes can do is to respect them enough not to fiddle with their digital paraphernalia during their lectures. Even I would show a poor presenter that amount of consideration, and that's saying something.

Related items on eye-tracking technology from Vision Systems Design that you might also find of interest:

1. Eye tracking helps spot movement disorder

Tobii Technology (Danderyd, Sweden) has selected a behavioral research team that used eye-tracking technology to enhance its understanding of Progressive Supranuclear Palsy (PSP) as the winner of its annual Tobii EyeTrack Award.

2. Researchers can track what catches a designer's eye 

An eye-tracking system developed by researchers at The Open University and the University of Leeds (Leeds, UK) aims to remove the constraints on creativity imposed by computer-aided design (CAD) tools.

3. Eye test for Alzheimer's disease 

UK researchers have demonstrated that people with Alzheimer's disease have difficulty with one particular type of eye-tracking test.

4. Eye tracker spots liars with greater accuracy

Computer scientists at the University at Buffalo (UB; Buffalo, NY, USA) are exploring whether machines can read visual cues that give away deceit.

Wednesday, October 24, 2012

Image analysis makes shopping simple

A 25-year-old with a master's degree in computer science from Bristol University (Bristol, UK) has picked up a $100,000 cash prize after winning the Cisco British Innovation Gateway Awards for developing a novel image-matching application that can take the drudgery out of shopping for clothes.

Jenny Griffiths -- one of only two women in her class of thirty at the university -- developed the idea after getting frustrated with attempting to locate and purchase clothes that she liked at a reasonable price. So she went out and used her newfound knowledge to make the whole process a whole lot easier.

The result of her hard work is what's now known as "Snap Fashion" -- a visual search engine that lets consumers search for clothing using images instead of words.

In a nutshell, it comprises a free smartphone app that a user first fires up to take an image of a product that she might like to buy but perhaps can't afford. The image is then delivered to the Snap Fashion server, where algorithms developed by Griffiths automatically analyze it and return images of similar -- hopefully less expensive -- products from a variety of retailers' websites within five seconds. The results can then be filtered based on any aspect of the product -- such as the color and cut of a dress. Finally, an item can be purchased directly from the retailer, while Snap Fashion earns commission on every sale.
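
Griffiths hasn't disclosed how her matching algorithms work, but the flavor of content-based image retrieval is easy to convey. The sketch below -- my own illustration in Python, not Snap Fashion's method -- ranks catalogue images against a query by color-histogram intersection; a production system would add garment segmentation, shape and texture cues on top.

    import numpy as np

    def color_histogram(img, bins=8):
        """Normalized 3-D RGB histogram of an HxWx3 uint8 image."""
        hist, _ = np.histogramdd(img.reshape(-1, 3),
                                 bins=(bins,) * 3, range=((0, 256),) * 3)
        return hist.ravel() / hist.sum()

    def rank_catalogue(query, catalogue):
        """Sort {name: image} items by histogram intersection with the query;
        identical color distributions score 1.0, disjoint ones 0.0."""
        q = color_histogram(query)
        scores = [(np.minimum(q, color_histogram(img)).sum(), name)
                  for name, img in catalogue.items()]
        return sorted(scores, reverse=True)  # best match first

Filtering on a single aspect of a garment, such as its color, would then amount to re-weighting the features used in the ranking.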

The net is cast wide courtesy of Snap Fashion's database, which currently counts more than 100 major retailers. It's a catalogue that boasts high street giants including Gap, Jigsaw, Jaeger, Uniqlo, Warehouse, L K Bennett, French Connection, Reiss, Monsoon, and Kurt Geiger, in addition to retailers from Mywardrobe.com to Stylebop.com to Farfetch.com, and a host of department stores such as Harrods, Selfridges, Liberty, House of Fraser, and US fashion emporium Bloomingdale's.

Snap Fashion is not just a shopping tool, either. Among its other tricks is a personal shopping service that offers tips and advice on what styles best suit the user's personal body shape via body shape recognition technology.


The Cisco British Innovation Gateway Awards were launched this year with the aim of recognizing and supporting up-and-coming innovators, entrepreneurs and businesses. And naturally enough, I'm delighted that one of the first winners of the award has developed a product related to image analysis!

But if you are a budding inventor in the UK and feeling a bit miffed that you hadn't heard of the awards in time to enter them this year, don't worry. The good news is that the contest -- which aims to attract high-potential technology startups that are seeking investment and support -- is running over a five-year period.

More information on the Cisco British Innovation Gateway Awards can be found here. Snap Fashion's home page can be found here.

Friday, October 19, 2012

Kick starting manufacturing

In today's tough economic climate, it's difficult for small teams of engineers to obtain funding from the banks to finance the development of their new products no matter how original or innovative they might be.

Now, however, thanks to a website called Kickstarter, there's an alternative way that engineers with bright ideas can reduce the financial burden of getting their new products into the hands of early adopters.

The folks behind the New York City-based Kickstarter web site describe it as an "all-or-nothing" funding platform which enables creative individuals to post ideas for potential new products on the site.

If readers of the web site like a particular product enough to buy it, they can pre-order it by pledging money to the company that has designed it. If the company then succeeds in reaching its funding goal to manufacture the product, all backers' credit cards are charged and the products are produced and delivered. If the project falls short of reaching its goals, no one is charged.

One of the individuals excited about the Kickstarter site is Auckland, New Zealand-based engineer Mark Hampton who is hoping to raise enough funding on the site to make his dreams of producing a right-angled lens for the Apple iPhone camera and Apple iPad come true.

Hampton started the development of his so-called HiLO lens in 2011. Since then, he has teamed up with an optical engineer, a mechanical designer, and an app developer to demonstrate the effectiveness of a prototype of the device, which he now hopes to take into full production through his Kickstarter campaign.

According to Hampton, the HiLO product is built from three custom designed lenses and a prism. A free app that will come with the product corrects for the mirroring of the image caused by the prism and improves image quality.


It's a pretty simple concept, but one that might well fill a niche in the market for individuals who want to take high angle and low angle photos on their iPhones.

Backers on Kickstarter can pre-purchase one of Hampton's HiLO lenses now. If the backers pledge a total of $27,500, then the pledges will be collected and an initial production run of the lens system will be made in China. So far, Hampton and his team have raised $8,450. Hopefully, more support will be forthcoming!

Mark Hampton's project page on the Kickstarter web site can be found here.

Thursday, October 11, 2012

Good times in Asia?

A UK researcher made a projection this month that Asian countries -- excluding Japan -- will be the largest market for machine vision systems in 2016.

According to John Morse, the author of the latest machine vision report from IMS Research (Wellingborough, UK), Japan has always been the largest market for machine vision in the Asia Pacific region. But despite this, Japan's economic growth is currently slow largely due to decreasing demand for its exports.

Morse says that this is not expected to improve much over the next five years, because Japan's leading position is being eroded as other countries within the region embrace automation in their production facilities.

Nevertheless, the report claims that the Asian region itself -- with the exception of Japan that is -- is collectively forecast to generate revenues from sales of machine vision systems that will exceed those generated in the Americas after 2012. This rapid growth is expected to continue -- revenues from the Asian region will even surpass revenues generated in Europe, the Middle East and Africa (EMEA) after 2015.

The report projects that the strongest growth for machine vision systems will be in China, South Korea and Taiwan, reflecting the general economic growth forecast in these countries.

The latest outlook from the International Monetary Fund (IMF) would appear to give a lot of credibility to the IMS report. The IMF is projecting, for example, that in Asia, growth in Real Gross Domestic Product (GDP) will average 6.7 percent in 2012, and is forecast to accelerate to 7.25 percent in the second half of 2012.

In its latest World Economic Outlook, unveiled in Tokyo ahead of the IMF-World Bank 2012 Annual Meetings, the IMF said that the advanced economies, however, were unlikely to fare as well.

In the US, growth will average 2.2 percent this year. Real GDP is projected to expand by about 1.5 percent during the second half of 2012, rising to 2.75 percent later in 2013.

In the Euro area, the outlook is not even that rosy. There, real GDP is projected to decline by 0.4 percent overall in 2012, with public spending cutbacks and the still-weak financial system weighing on prospects during the second half of the year.

But despite the bright prospects that both IMS and the IMF have painted for the folks in Asia, I can't help but feel that -- with decreasing exports to the US and Europe -- they might just see their growth stunted too.

Tuesday, October 9, 2012

Surf's up

When my nephew left college with a master's degree in computer science and electronic engineering, he was headhunted by more than a few firms, some of which were in the field of engineering and some of which were in the field of financial services.

Being a talented young man, he was faced with a choice -- should he take one of the jobs he was offered by one of the engineering companies, or should he accept a more lucrative position at a financial organization in The Big City.

After deliberating the issue for several days, he decided to let his heart rule his wallet and took a job working for a software development company, rather than swan off to make his fortune working in a profession that was somewhat unrelated to his education.

It's an issue many graduates are faced with. After spending tens of thousands of dollars on their education, they are inevitably drawn to the idea of making as much money as possible to pay off their loans, even if it means leaving the field of engineering to do so.

What brought this issue home to me again this week was a recent article in "The Dartmouth", the daily student newspaper of Dartmouth College, which just happens to be America's oldest college newspaper.

The article -- which was written by Hannah Wang -- detailed the development of an Android application that uses data captured by the camera in a smartphone to analyze a person's driving habits. To do so, the application analyzes drivers' physical motions, such as head turning and blinking rates.
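
The article gives no implementation detail, but one widely used proxy for blink detection from camera data is the eye aspect ratio (EAR) computed from six landmarks around each eye, as popularized by Soukupová and Čech. Here is a small Python sketch of that idea -- an assumption about the general approach, not a reconstruction of Bao's actual app.

    import numpy as np

    def eye_aspect_ratio(eye):
        """`eye` holds six (x, y) landmarks around one eye, outer corner
        first. The ratio collapses toward zero as the eyelid closes."""
        p = np.asarray(eye, dtype=float)
        vertical = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
        horizontal = np.linalg.norm(p[0] - p[3])
        return vertical / (2.0 * horizontal)

    def count_blinks(ear_series, threshold=0.2):
        """Count falling edges where the EAR drops below the open-eye
        threshold; the 0.2 figure is a common rule of thumb."""
        closed = [e < threshold for e in ear_series]
        return sum(1 for a, b in zip(closed, closed[1:]) if b and not a)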


The application came about as the result of a project by a chap called Thomas Bao who graduated this year from the college. Apparently, Bao began the project knowing little about machine learning, computer vision or Java. But that didn't stop him from studying the subject rigorously enough to develop the application.

Since then, Bing You, a visiting researcher from Taiwan's Academia Sinica, has integrated Bao's driver-side app into a larger project called CarSafe, which uses the dual cameras on a smart phone to detect both driver-side and road-side information to alert drivers about potentially dangerous situations, such as unsafe following distances.

Having now left the college, however, the talented Bao has waved goodbye to the engineering profession. According to the article, he is now working at Evolution Capital Management, a hedge fund based in Hawaii.

Now the article, of course, doesn't specifically say why Bao chose to do so. It may have been for financial reasons, or it could have been because he is fond of big wave surfing. But whatever the reason, I think it's a shame that Bao and many other talented folks like him don't remain in the engineering business, as my nephew chose to do.

Reference:

1. Professor creates phone app for safer driving habits

Wednesday, October 3, 2012

Guns N' Reshoring

A Michigan State University (East Lansing, MI, USA) academic has authored a new study that claims that many US firms are moving or considering moving their manufacturing operations back to domestic soil from overseas.

According to Tobias Schoenherr, an assistant professor of supply chain management, rising labor costs in emerging countries, high oil prices, increasing transportation costs, and global risks such as political instability are fueling the trend.

"Going overseas is not the panacea that it was thought of just a decade or so ago. Companies have realized the challenges and thus are moving back to the US," says Schoenherr.

Schoenherr's study found that 40 per cent of manufacturing firms believe there is an increasing movement toward "reshoring" -- or moving manufacturing plants back to the US from countries such as China and India. While the results differed by industry, the trend was led by aerospace and defense, industrial parts and equipment, electronics, and medical and surgical supplies.

The study, which was sponsored by the Council of Supply Chain Management Professionals and based on a survey of 319 firms, also found that nearly 38 per cent of companies indicated that their direct competitors have already reshored.

In addition to rising costs and global risks, Schoenherr said companies are concerned with the erosion of intellectual property overseas and product quality problems, which can be difficult to fix when dealing with multiple time zones and language and cultural barriers.

Rob Glassburn, the Vice President of Operations at 3D Engineering Solutions (Cincinnati, OH, USA), would be the first to agree with Schoenherr. In a recent blog, Glassburn described how his company had recently been called in to reverse engineer parts for a popular airsoft gun maker that once manufactured its products abroad. And that, according to Glassburn, was a direct consequence of such intellectual property theft.


Specifically, Glassburn wrote, problems arose when the particular gun maker learned that its offshore manufacturer was using its proprietary tooling to "back door sell" guns made with its own equipment. To curb the practice, it closed down operations and moved manufacturing back to the US, which helped secure 300 domestic jobs.

Sadly, however, no CAD models or prints of the gun parts were accessible when the gun maker returned production back to the US, and that's why 3D Engineering Solutions was called in. By employing the company's 3D laser scanning technology to digitize the assembly of air gun parts, the company was then able to create tooling to manufacture parts in the US once more.

One can only hope that if the trend to reshore continues, it will also mean more business for those machine builders in the vision industry who develop systems to automate the process of inspecting those products as well!

Tuesday, October 2, 2012

See-through soil simplifies root imaging


As anyone who knows me will testify, I'm about as fond of gardening as I am of golf. My ideal garden would either be covered over with concrete or short pile synthetic turf, thereby eliminating the need to maintain a lawn or take care of any plants and shrubs.

Despite that fact, I'm always intrigued to read how researchers and scientists across the world are using innovative image processing systems to analyze the behavior of plants.

There's no doubt that by studying the growth of the roots of plants, and determining what factors influence it, scientists might develop hardier varieties of crops that are more resistant to disease and climate change.

In February this year, one team of researchers at the University of Nottingham (Nottingham, UK) was awarded a 3.5m Euro grant to do just that. They plan to image wheat roots in a move that will enable them to select new agricultural varieties that are more efficient at water and nutrient uptake.

To do so, the researchers there plan to use X-ray Micro Computed Tomography to capture images of the shape and branching patterns of roots in soil. Those images will then be fed into the researchers' "RooTrak" software, which overcomes the problem of distinguishing between roots and other elements in the soil.

Now, however, discerning the roots of the plants from the soil surrounding them could become a lot easier, thanks to a team from the James Hutton Institute (Aberdeen, Scotland) and the University of Abertay (Dundee, Scotland) who have developed a see-through soil based on a synthetic composite known as Nafion.

They claim that the product is very similar to real soil in terms of physical and biological variables: its water retention, its ability to hold nutrients and its capability for sustaining plant growth.


Lionel Dupuy, a theoretical biologist in the ecological sciences group at the James Hutton Institute, said that the transparent soil could be used by researchers to study the spread and transmission of soil borne pathogens, screen the root systems of a range of genotypes, as well as understand how plants or microbes access nutrients that are heterogeneously distributed in the soil.

While the formulation of the new soil may well have taken the scientists two years to perfect, to me, the real lesson to be learned from its development is the degree of lateral thinking that the researchers employed to solve the problem of how best to capture images of roots in soil.

Rather than just throw complex hardware and software at the problem, they took a completely different approach by creating a new medium that may ultimately enable researchers to image the roots of plants using vision systems that are a lot simpler than those in use today.

References:

1. Software gets to the root of the problem


A team of researchers at the University of Nottingham (Nottingham, UK) has developed image analysis software that can automatically distinguish plant roots from other materials found in soil.

2. Robotic image-processing system analyzes plant growth

Researchers at the University of Wisconsin–Madison (Madison, WI, USA) have developed an image-processing system that captures time-lapse images of how plants grow.

3. Cameras get to the root of global warming

Researchers at the Oak Ridge National Laboratory (Oak Ridge, TN) are to use a system of minirhizotrons to examine the effects of elevated temperatures and levels of carbon dioxide on the roots of plants in wetlands.

Tuesday, September 25, 2012

Weisswurst, weissbier, and vision systems

Last week, I dispatched our industrious European Editor Dave Wilson off to the rather lovely Bavarian city of Munich to gain some insight into the work that is being undertaken by companies in the region.

During his brief sojourn in Germany, Dave met up with a number of outfits involved in the business of developing vision systems. One of these was Opto -- a small- to medium-sized private enterprise with around 35 employees based in the town of Grafelfing on the outskirts of Munich.

Now at the outset, it might seem that a company of such a size might not have a whole lot to discuss. But first appearances can be deceptive, as Dave discovered when Markus Riedi, the President of Opto, gave him a brief presentation on what the company had been up to over the years.

During that presentation, Dave realized that, while the company might best be known for the optical components that it markets, in fact, around 55 percent of its business comes from developing rather complex custom-built products, where it combines its expertise in optics, mechanics, software and electronics to deliver complete modules that its customers can integrate into their own machines.

Herr Riedi showed Dave several examples of the sorts of engineering projects that the company had undertaken. One was an integrated imaging module developed for the inspection of semiconductor dies. Another was an optical subsystem used to inspect pixels on an LCD screen. Then, there was an opto-mechanical module for integration into a laser eye surgery system. And, last but not least, was an imaging system the company had developed to image cells in an embryo incubation machine.



After the presentation, Herr Riedi told Dave that his company was very selective about the companies that it works with to develop products, and only targets areas where the company can provide a lot of value added expertise.

And the strategy appears to be paying off. From a 0.5m Euro business in 2006, Herr Riedi has grown the company to the 7m Euro business that it is today. By 2020, he told Dave that he hopes to push that figure up to the 20m Euro mark.

One way he plans to do that is to actively promote the products that his customers are manufacturing. The idea is a simple one -- the more products they sell, the more subsystems Opto sells. To do so, Riedi has already started to populate his company's web site with examples of the end-user products that his complex optical subsystems have been designed into.

Impressed with the caliber of companies like Opto, Dave is now looking forward to the day when he might take another trip to Bavaria to meet up with yet more folks involved in the imaging business. But although he tells me that his motives are purely altruistic, I have a suspicion that the quality of the local Bavarian weisswurst and weissbier might also have something to do with it.

Friday, September 21, 2012

Vision Systems in Action

As regular readers of this blog might recall, a few weeks ago I decided to hold a competition in which I challenged systems integrators to email me images of their very own vision systems in action.

To encourage readers to enter the aptly named "Vision Systems in Action 2012" competition, I promised that the winning images that we received would be published in an upcoming blog, providing the winners with lots of publicity and, potentially, a few sales leads as well.

Because the competition didn't come with any prizes, however, the response was less than spectacular. Nevertheless, the Vision Systems Design judging panel was impressed by the high standard and diversity of the photographs we did receive. And now, after several hours deliberating over the entries, I'm pleased to say that our judges have chosen a winner as well as a runner-up.

The winner of the "Vision Systems in Action 2012" competition is none other than Earl Yardley, the Director of Industrial Vision Systems (Kingston Bagpuize, UK) who submitted a rather stunning image of a vision system his company has developed to inspect a medical device.



The judges unanimously decided that Yardley's photograph should take first prize, not only for its quality but also for the fact that it followed the overall brief set by the judging panel. They were particularly impressed by the photographer's use of lighting as well as the effective use of the color blue which dominated the image.

The runner-up in the "Vision Systems in Action 2012" competition was Vincent Marcoux, the sales and marketing co-ordinator of Telops (Quebec, Canada). He submitted an equally stunning picture of the Chateau Frontenac, which was designated a National Historic Site of Canada in 1980. Marcoux captured the image of the chateau using the company's very own HD-IR 1280 x 1024 infrared camera.


The judges were extremely impressed by the exquisite quality of the image, as well as by the sense of foreboding that it conveyed. Our panel was particularly taken by the effectiveness of the infrared imaging technique, as well as by the striking use of the color orange that dominates the image.

As the Editor-in-Chief of Vision Systems Design, I would like to thank everyone for their interest in the "Vision Systems in Action" competition and for taking the time and effort to participate. Perhaps next year, we shall do it again.

Thursday, September 20, 2012

Build your own supercomputer

Many image processing tasks are computationally intensive. As such, system integrators are always on the lookout for any means that will help them to accelerate their application software.

One way to do this is to determine whether an application could be optimized -- either by hand or by using optimization tools such as Vector Fabrics' (Eindhoven, The Netherlands) Pareon -- to take advantage of the many processing cores found in the latest microprocessors from AMD and Intel.

If an application can easily be separated into a number of independent parallel tasks -- a class known in the industry as "embarrassingly parallel problems" -- then the only limitation the systems integrator faces is sourcing enough inexpensive processing cores to perform the task.
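
To make that concrete, here's a minimal sketch of an embarrassingly parallel image-processing job written with Python's standard multiprocessing module. The file names and the blur operation are hypothetical stand-ins of my own choosing, but the point stands: each image is processed independently, so the work spreads across however many cores the host machine provides.

    # A minimal sketch of an embarrassingly parallel job: each image is
    # filtered independently, so the work spreads across all available cores.
    from multiprocessing import Pool

    from PIL import Image, ImageFilter

    def process(path):
        # Each worker loads one image, blurs it and saves the result;
        # no worker depends on any other worker's output.
        img = Image.open(path)
        img.filter(ImageFilter.GaussianBlur(radius=2)).save("out_" + path)
        return path

    if __name__ == "__main__":
        paths = ["frame_%03d.png" % i for i in range(100)]  # hypothetical inputs
        with Pool() as pool:  # defaults to one worker per core
            for done in pool.imap_unordered(process, paths):
                print("finished", done)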

Fortunately, since the advent of the GPU, cores are plentiful. As such, many engineers are harnessing the power of graphics cards such as NVIDIA's GeForce GTX 470 -- which sports no fewer than 448 CUDA cores and 1GByte of memory -- to vastly accelerate their image processing applications.
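
For a flavor of what that looks like in practice, here's a minimal sketch of a per-pixel operation offloaded to those CUDA cores. It uses the numba compiler's CUDA support rather than the raw CUDA C that many engineers would write -- an assumption of tooling on my part -- but the structure is the essence of the technique: one lightweight thread per pixel, thousands of them in flight at once.

    # A minimal sketch of a GPU kernel that inverts an 8-bit image,
    # written with numba's CUDA support. One thread handles one pixel,
    # so the GPU's hundreds of cores all work simultaneously.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def invert(src, dst):
        x, y = cuda.grid(2)  # this thread's pixel coordinates
        if x < src.shape[0] and y < src.shape[1]:
            dst[x, y] = 255 - src[x, y]

    img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # fake frame
    out = np.zeros_like(img)

    threads = (16, 16)  # 256 threads per block
    blocks = ((img.shape[0] + 15) // 16, (img.shape[1] + 15) // 16)
    invert[blocks, threads](img, out)  # numba copies the arrays to and fro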

Now in a few cases where engineers really need to harness even more hardware power, they have only one alternative -- build it themselves. That, indeed, is exactly what engineers at the Air Force Research Laboratory (Rome, NY, USA) have done.

Their massive supercomputer -- which was developed for the Air Force for image processing tasks -- is ranked among the forty fastest computers in the world. Yet, believe it or not, it has been constructed by wiring together no fewer than 1,700 off-the-shelf PlayStation 3 gaming consoles!


Now if you are anything like me, you are probably wondering how you might design such a beast yourself without shelling out an inordinate sum of money on so many Sony games consoles.

If you do, you might like to check out the web page of Professor Simon Cox from the University of Southampton (Southampton, UK), who, together with a team of computer scientists at the university (and his six-year-old son James), has built a supercomputer out of Raspberry Pis, a rat's nest of cables and an awful lot of Lego.

"As soon as we were able to source sufficient Raspberry Pi computers we wanted to see if it was possible to link them together into a supercomputer. We installed and built all of the necessary software on the Pi starting from a standard Debian Wheezy system image," says Professor Cox.

The machine, named "Iridis-Pi" after the university's Iridis supercomputer, runs off a single 13A mains socket and uses the Message Passing Interface (MPI) to enable the processing nodes to communicate over Ethernet. The system has a total of 64 processors and 1TByte of storage (a 16GByte SD card in each Raspberry Pi).
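
For the curious, here's a minimal sketch of how the nodes in such a cluster exchange messages via MPI. It uses the mpi4py Python bindings -- an assumption on my part, since the Southampton guide walks you through its own toolchain -- but the rank/send/receive pattern is the heart of the thing.

    # A minimal sketch of node-to-node communication over MPI.
    # Run across the cluster with, for example:
    #   mpiexec -n 64 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()  # this process's ID, 0..size-1
    size = comm.Get_size()  # total number of processes in the job

    if rank == 0:
        # The head node collects a greeting from every other node.
        for source in range(1, size):
            print(comm.recv(source=source))
    else:
        comm.send("Hello from node %d of %d" % (rank, size), dest=0)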

Now I'm not about to claim that this supercomputer is going to rank up there with the PlayStation-based system built for the US Air Force, but it certainly would be a fun project to build and experiment on. And at a price of under $4000, who wouldn't want to give it a go?

Fortunately, for those interested in doing so, the learned Professor has published a step-by-step guide so you can build your own Raspberry Pi supercomputer without too much effort.

The Southampton team wants to see the low-cost supercomputer used to enable students to tackle complex engineering and scientific challenges. Maybe the system isn't really the most cost-effective way to do that, but it certainly is inspirational.

Editor's note:  PA Consulting Group and the Raspberry Pi Foundation have teamed up to challenge schoolchildren, students and computer programmers to develop a useful application using a Raspberry Pi that will make the world a better place. I'm sure they would welcome ideas from the imaging community! Details on the competition can be found here.


Friday, September 7, 2012

Turn your iPhone into an IR camera

If you live in an old drafty house like I do, you're probably not looking forward to another long cold winter -- not least because you will inevitably find yourself shelling out exorbitant sums of money just to keep the place nice and toasty.

Fortunately, since the advent of thermal imaging cameras, it's now pretty easy to identify patterns of heat loss from your property and to then take some remedial action to fix any problems.

Due to the cost of the cameras, however, it's unlikely that you will want to go out and buy one yourself. It's more likely that you will call on the services of a professional home inspector or energy auditor, who will bring their own thermal imaging kit around to your property to perform the task.

Even a professional survey, however, isn't likely to come cheap, although probably a darned sight less expensive than buying your own camera.

Faced with these two alternatives, engineer Andy Rawson decided to turn his iPhone into a thermal camera by developing a custom-built hardware and software solution that interfaces to it.

More specifically, Rawson designed a PCB that sports a Melexis (Ieper, Belgium) MLX90620 FIRray device, which can measure thermal radiation from -20°C to 300°C thanks to its 16 x 4 element far infrared (FIR) thermopile sensor array. The software transmits the thermal images collected by the infrared sensor on Rawson's board to the iPhone through its dock connector, after which they are overlaid onto the phone's display together with numerical temperature values.
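
To illustrate the display side of the idea, here's a minimal sketch that takes one 16 x 4 frame of temperatures -- faked with random numbers here, and emphatically not Rawson's actual code -- and turns it into a coarse false-color image of the sort that could be blended over the phone's display.

    # A minimal sketch of turning one 16 x 4 frame of temperatures into
    # a coarse false-color image suitable for overlaying on a display.
    import numpy as np

    def frame_to_rgb(temps, t_min=0.0, t_max=40.0):
        # Normalize temperatures into 0..1, clipping anything out of range.
        norm = np.clip((temps - t_min) / (t_max - t_min), 0.0, 1.0)
        # Simple palette: cold pixels blue, hot pixels red.
        rgb = np.zeros(temps.shape + (3,), dtype=np.uint8)
        rgb[..., 0] = (255 * norm).astype(np.uint8)          # red channel
        rgb[..., 2] = (255 * (1.0 - norm)).astype(np.uint8)  # blue channel
        # Blow each sensor element up to a 10 x 10 block of pixels, so the
        # 16 x 4 frame becomes a viewable 160 x 40 image.
        return np.kron(rgb, np.ones((10, 10, 1), dtype=np.uint8))

    temps = 20.0 + 5.0 * np.random.randn(4, 16)  # fake frame for testing
    print(frame_to_rgb(temps).shape)             # -> (40, 160, 3)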



Having developed the hardware and the software, Rawson says that he would now like to make and sell the systems so others can save money and energy. He figures he should be able to manufacture and sell them for around $150.

Nevertheless, this is also going to be an open-source hardware project, so if you want to make your own systems, that's fine by him too. A man of his word, Rawson posted the iPhone code and the board layout on the internet this week. Interested readers can find it here.

While he might be a talented engineer, Rawson admits that he is terrible at dreaming up names for his projects! So he's encouraging people to submit names for the new design to his web site. The winner will receive one of the thermal imaging systems for free.

A video of the thermal imaging system in action can be seen on YouTube here.

Tuesday, September 4, 2012

Kinect comes home

Three Swedish researchers from the Centre for Autonomous Systems (CAS) at the Kungliga Tekniska Hogskolan (KTH) in Stockholm, Sweden are asking people to get involved in a crowdsourcing project to build a library of 3-D models of objects captured using their Microsoft Kinect cameras.

The idea behind the so-called Kinect@Home project -- which was started by Alper Aydemir, Rasmus Goransson and Professor Patric Jensfelt -- is to acquire a vast number of such models from the general public that robotics and computer vision researchers can then use to improve their algorithms.

The researchers chose the Microsoft Kinect camera for some pretty obvious reasons. Not only can it capture both RGB images and depth values of objects; since its launch, it has also entered the homes of some 20 million people, making it a perfect piece of hardware for a crowdsourcing task.

Before any captured image frames of an object from the Kinect can be uploaded to the Kinect@Home server, users first need to connect their Kinect camera to their PC and install a plug-in. Once they have done so, the website starts showing the live Kinect images in the browser to confirm that the software is working correctly.

Next, the plug-in can be used to start uploading captured frames of an object to the researchers' Kinect@Home server. After uploading is complete, optional metadata can be associated with the model of the object. As well as uploading their own models to the site, users can also download models created by others and import them into their own 3-D modeling software packages.
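
If you'd like to see what raw Kinect data looks like before getting involved, here's a minimal sketch that grabs a synchronized RGB and depth frame pair using the open-source libfreenect Python bindings. To be clear, Kinect@Home's own plug-in handles capture and upload itself; this is purely an illustration.

    # A minimal sketch of grabbing one synchronized RGB + depth frame pair
    # from a Kinect via the open-source libfreenect Python bindings.
    import freenect
    import numpy as np

    rgb, _ = freenect.sync_get_video()    # 480 x 640 x 3 color image
    depth, _ = freenect.sync_get_depth()  # 480 x 640 raw 11-bit depth values

    # The sort of sanity check a capture tool might run before uploading:
    # frames pointed at blank walls or empty space return mostly invalid
    # depth readings (the raw value 2047 means "no measurement").
    valid = np.count_nonzero(depth < 2047) / depth.size
    print("valid depth pixels: %.0f%%" % (100 * valid))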

To display the models over the web, their resolution has been lowered for the time being, but the researchers say that as they acquire faster servers and more bandwidth, this will change dramatically.

At present, the Kinect@Home browser plug-in only runs on a PC running Microsoft Windows Vista, 7 or 8. However, the Swedish software engineers would be pleased to talk to any other software developers that might be interested in porting the plug-in to the Linux or Mac operating systems, as well as providing support for Microsoft's Software Development Kit.

If you do give the software a try and your models look a bit messed up when they appear in the browser, it's probably your fault. To get the best results from the system, the software developers advise users to move their Kinect cameras slowly and not to point them towards blank walls, featureless or empty spaces.

Personally, I'm tempted to go out and buy a Kinect just to see what Kinect@Home is like. But if you already have one, you can try out the software here.

Thursday, August 30, 2012

Robots with vision piece together damaged coral

The deep waters west of Scotland are characterized by large reef-forming corals that provide homes to thousands of animals. But Scottish corals are threatened by the adverse impacts of bottom fishing, which damages and kills large areas of reef.

At present, the only solution to the problem is to employ scuba divers to reassemble the coral fragments on the reef framework. However, the method has had only limited success, because the divers cannot spend long periods underwater, nor can they reach the depths of over 200 meters at which some of the deep-sea coral grows.

Now, however, researchers at Heriot-Watt University (Edinburgh, Scotland) are embarking on a project that will see the teams of scuba divers replaced by a swarm of intelligent robots.

The so-called "Coralbots" project is a collaborative effort led by Dr. Lea-Anne Henry from the School of Life Sciences in partnership with Professor David Corne from the School of Mathematical and Computer Science and Dr. Neil Robertson and Professor David Lane from the School of Engineering and Physical Sciences.

Their idea is to use small autonomous robots to seek out coral fragments and re-cement them to the reef. To help them do just that, the computers on board the robots will distinguish the fragments from other objects in the sea using object recognition software that is currently under development.
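
As a flavor of what such software has to do, here's a minimal, entirely hypothetical sketch that segments candidate coral fragments out of a seabed image by color and then filters the detections by size, using OpenCV. The Heriot-Watt pipeline is still under development, and this is certainly not it.

    # A minimal, hypothetical sketch of one recognition approach: segment
    # candidate coral fragments from a seabed image by color, then discard
    # detections that are too small to be fragments.
    import cv2

    img = cv2.imread("seabed.jpg")  # hypothetical input frame
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Assume (loudly) that fragments occupy a known hue band measured from
    # sample images; everything outside the band is treated as background.
    mask = cv2.inRange(hsv, (0, 60, 60), (20, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    fragments = [c for c in contours if cv2.contourArea(c) > 500]
    print("candidate fragments found:", len(fragments))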



If the researchers can realize their goals, swarms of such robots could be deployed immediately after a hurricane, or in a deep area known to be impacted by trawling, to rebuild a reef in days or weeks.

While it might seem pretty ambitious, the folks at Heriot-Watt have got plenty of experience in underwater robotics and signal and image processing. At the university's Ocean Systems Lab, they have previously developed obstacle avoidance and automatic video analysis algorithms, as well as autonomous docking and pipeline inspection systems.

The team of researchers working on the new project is supported by Heriot-Watt Crucible Funding which is specifically designed to kick-start ambitious interdisciplinary projects.

Reference: Underwater robots to 'repair' Scotland's coral reefs. BBC technology news.

Wednesday, August 29, 2012

Competition time

Since its launch in 1990, the Hubble telescope has beamed hundreds of thousands of images back to Earth, shedding light on many of the great mysteries of astronomy.

But of all the images that have been produced by the instruments on board the telescope, only a small proportion of them are visually attractive, and an even smaller number are ever actually seen by anyone outside the small groups of scientists that publish them.

To rectify that matter, the folks at the European Space Agency (ESA) decided to hold a contest that would challenge members of the general public to take never-before-publicized images from Hubble's archives and to make them more visually captivating through the use of image processing techniques.

This month, after sifting through more than 1000 submissions, the ESA has decided on the winner of its so-called Hubble's Hidden Treasures competition -- a chap by the name of Josh Lake from the USA, who submitted a stunning image of NGC 1763, part of the N11 star-forming region in the Large Magellanic Cloud.

Lake produced a two-color image of NGC 1763 that contrasted the light from glowing hydrogen and nitrogen. The image is not in natural colors -- hydrogen and nitrogen produce almost indistinguishable shades of red light -- but Lake processed the images to separate out the blue and red, dramatically highlighting the structure of the region.
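
For readers tempted to try the technique on Hubble's archives themselves, here's a minimal sketch of the false-color trick: read two grayscale narrowband exposures (the file names here are hypothetical), stretch them, and assign each to its own color channel so that two nearly identical shades of red become cleanly separable.

    # A minimal sketch of building a two-color composite from a pair of
    # grayscale narrowband exposures (file names are hypothetical).
    import numpy as np
    from astropy.io import fits

    def stretch(data):
        # Percentile stretch: map the 1st..99th percentile range to 0..1.
        lo, hi = np.percentile(data, (1, 99))
        return np.clip((data - lo) / (hi - lo), 0.0, 1.0)

    h_alpha = stretch(fits.getdata("ngc1763_halpha.fits"))  # hydrogen light
    n_ii = stretch(fits.getdata("ngc1763_nii.fits"))        # nitrogen light

    # Assign hydrogen to red and nitrogen to blue, with a blend for green,
    # so structure hidden in near-identical reds stands out.
    rgb = np.dstack([h_alpha, (h_alpha + n_ii) / 2.0, n_ii])
    print(rgb.shape)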


Through the publicity gained from the exercise, the organizers of the competition have undoubtedly attracted numerous people to the Hubble web site to see the many other spectacular images produced by the other folks who entered the contest.

Here at Vision Systems Design, I'd like to emulate the success of the Hubble's Hidden Treasures competition by inviting systems integrators to email me any astonishing images that they may have taken of their very own vision systems in action.

My "Vision Systems in Action" competition may not come with any prizes, but I can promise that the best images that we receive will be published in an upcoming blog, providing the winners with lots of publicity and, potentially, a few sales leads as well.

If you do decide to enter, of course, please do take the time to accompany any image you submit with a brief description of the vision system and what it inspects. Otherwise, you will be immediately disqualified!

The "Vision Systems in Action" competition will close on September 15, 2012. You can email your entries to me at andyw@Pennwell.com.

Tuesday, August 21, 2012

Hiding from the enemy

Camouflage is widely used by folks in the military to conceal personnel and vehicles, enabling them to blend in with their background environment or making them resemble anything other than what they really are.

In modern warfare, however, a growing number of sensors can 'see' in parts of the spectrum where people cannot. Therefore, designing camouflage for a wide variety of terrains, and enabling it to be effective across the visual, ultraviolet, infrared and radar bands of the electromagnetic spectrum is crucial.

One way to do this is to examine how the natural camouflage of animals enables them to hide from predators by blending in with their environment, and then mimicking those very same techniques using man-made materials.

Thinking along such lines, a team of researchers from Harvard University (Cambridge, MA, USA) announced this month that they have developed a rather interesting system that allows robots inspired by creatures like starfish and squid to camouflage themselves against a background.

To create the camouflage, the researchers use 3-D printers to create fine micro-channels in sheets of silicone, which they then use to dress the robots. Once the robots are covered with the sheets, the researchers can pump colored liquids into the channels, causing the robots to mimic the colors and patterns of their environment.


The system's camouflage capabilities aren't limited to visible colors, however. By pumping heated or cooled liquids into the channels, the robots can also be thermally camouflaged. What's more, by pumping fluorescent liquids through the micro-channels, the silicone sheets wrapped around the robots can be made to glow in the dark.

According to Stephen Morin, a postdoctoral fellow in the Department of Chemistry and Chemical Biology at Harvard University, there is an enormous amount of spectral control that can be exerted with the system. In the future, he envisages designing color layers with multiple channels that can be activated independently.

Dr. Morin believes that the camouflage system that the Harvard researchers have developed will provide a test bed that will help researchers to answer some fundamental questions about how living organisms most efficiently disguise themselves.

For my money, however, it might be more lucrative to see if the camouflage could be deployed to help the military hide its personnel in the field more effectively.

Reference: Camouflage and Display for Soft Machines, Science magazine, 17 August 2012: Vol. 337, no. 6096, pp. 828-832.

Friday, August 17, 2012

The tender trap

One of the great advantages of being the head editorial honcho here at Vision Systems Design magazine is that I'm able to spend a great deal of my time visiting systems builders who develop image processing systems that are deployed to inspect products in industrial environments.

During the course of my conversations with the engineers at these companies, I'm always intrigued to discover -- and later convey to the readers of our magazine -- how they integrate a variety of hardware components and develop software using commercially available image processing software packages to achieve their goals.

Although it's always intellectually stimulating to hear how engineers have built such systems, what has always interested me more are the reasons why engineers choose to use the hardware or software that they do.

Primarily, of course, such decisions are driven by cost. If one piece of hardware, for example, is less expensive than another and will perform adequately in any given application, then it's more likely than not to be chosen for the job.

The choice of software, on the other hand, isn't always down to just the price of the software package itself. If a small company has invested time and money in training its engineers to create programs using one particular software development environment, it's highly likely that the same software will be chosen time after time for the development of any new systems. The cost involved in retraining engineers on a new package may simply be too high, even if that package offers some technical advantages.

To ensure that they do not get trapped with outmoded software, however, engineering managers at systems builders need to meet with a number of image processing software vendors each year -- including the one whose tools they currently use -- and ask them to provide an overview of the strategic direction that they plan to take in forthcoming years.

If it becomes clear during such a meeting that there is a distinct lack of such direction on the software vendor's part, then those engineering managers should consider training at least one of their engineers to use a new package that might more effectively meet the demands of their own customers in the future.

Certainly, having attended more than a few trade shows this year, it's become fairly obvious to me which software vendors are investing their own money in the future and which are simply paying lip service to the task. And if you don't know who I'm talking about, maybe you should get out more.

Monday, August 13, 2012

My big head

It's been known for quite some time that the overall size of an individual's brain can be used to judge how intelligent he or she is. More specifically, it's been discovered that the size of the brain itself accounts for about 6.7 percent of individual variation in intelligence.

More recent research has pinpointed the brain's lateral prefrontal cortex, a region just behind the temple, as a critical hub for high-level mental processing, with activity levels there predicting another 5 percent of variation in individual intelligence.

Now, new research from Washington University in St. Louis suggests that another 10 percent of individual differences in intelligence can be explained by the strength of the neural pathways connecting the left lateral prefrontal cortex to the rest of the brain.

Washington University's Dr. Michael W. Cole -- a postdoctoral research fellow in cognitive neuroscience -- conducted the research that provides compelling evidence that those neural connections make a unique contribution to the cognitive processing underlying human intelligence.

The discovery was made after the Washington University researchers analyzed functional magnetic resonance brain images captured as study participants rested passively and also when they were engaged in a series of mentally challenging tasks, such as indicating whether a currently displayed image was the same as one displayed three images ago.

One possible explanation of the findings is that the lateral prefrontal region is a "flexible hub" that uses its extensive brain-wide connectivity to monitor and influence other brain regions. While other regions of the brain make their own special contribution to cognitive processing, it is the lateral prefrontal cortex that helps co-ordinate these processes and maintain focus on tasks at hand, in much the same way that the conductor of a symphony monitors and tweaks the real-time performance of an orchestra.

Now this discovery, of course, could have some important implications. Imagine, for example, a future where employers insisted that all their prospective employees underwent such a scan as part of the interview process, so that they could ensure that they always hired folks with lots of gray matter.

That thought might worry you, but not me. You see, my old man was always telling me that I had a big head. Then again, maybe he never meant his remarks to be taken as a compliment.

Interested in reading more about the uses of magnetic resonance imaging in medical applications? Here's a compendium of five top news stories on the subject that Vision Systems Design has published over the past year.

1. MRI maps the development of the brain

Working in collaboration with colleagues in South Korea, scientists at Nottingham University (Nottingham, UK) aim to create a detailed picture of how the Asian brain develops, taking into account the differences and variations which occur from person to person.

2. Ultraviolet camera images the brain

Researchers at Cedars-Sinai Medical Center (Los Angeles, CA, USA) and the Maxine Dunitz Neurosurgical Institute are investigating whether an ultraviolet camera on loan from NASA's Jet Propulsion Laboratory could help surgeons perform brain surgery more effectively.

3. Imaging technique detects brain cancer

University of Oxford (Oxford, UK) researchers have developed a contrast agent that recognizes and sticks to a molecule called VCAM-1 that is present in large amounts on blood vessels associated with cancer that has spread to the brain from other parts of the body.

4. Imaging the brain predicts the pain


Researchers from the Stanford University School of Medicine (Stanford, CA, USA) have developed a computer-based system that can interpret functional magnetic resonance (fMRI) images of the brain to predict thermal pain.

5. Camera takes a closer look at the workings of the brain

Optical imaging of blood flow or oxygenation changes is useful for monitoring cortical activity in healthy subjects and individuals with epilepsy or those who have suffered a stroke.