Friday, November 30, 2012

Vegetables at Tiffany's

Over the past decades, many individuals faced with a lack of work in their own countries have migrated to somewhat more affluent countries to eke out a living.

Many of these folks have moved to countries in Europe and to the United States of America -- either legally or illegally -- to find work in agricultural jobs, performing the sorts of tasks that the residents of those countries would either find too demeaning or too low-paid to take up.

These agricultural jobs usually involve living and working on farms for long hours, picking fruit and vegetables for a minimum wage. And although that minimum wage far exceeds what such folks might be able to earn in their own countries, one month's pay is usually barely enough to keep a roof over their heads, let alone buy them a nice entrecôte tranchée at L'Atelier de Joël Robuchon.

Now, as if things weren't tough enough for these poor migrant workers, they are made to feel even worse by political groups that insist that the menial jobs they perform pulling potatoes and picking oranges have taken jobs from the natives of those countries, whose own lives have naturally become poorer due to the work opportunities that are no longer available to them.

The farmers in the US and Europe, of course, see things a bit differently. Without such low-paid workers, their produce would not be competitive with that of farmers further afield. Indeed, in many cases, even though their pickers and pluckers are paid only minimum wage, the farmers still find it hard to compete with growers around the world who employ their workers for even less money.

But it looks as if, in the not-too-distant future, all of this is about to change, thanks to the robotic harvesting machinery that is under development in the US and the European Union.

Just this week, for example, Vision Systems Design reported on two new developments in the field (no pun intended). One was a $6m project in which engineers at Purdue University and Vision Robotics have teamed up to develop an automated vision-based grapevine pruner. The other was a fully automatic vision-based robotic system for harvesting both white and violet asparagus that is being funded under a European grant.

These projects, of course, represent just the tip of the iceberg. Numerous other projects of a similar nature are under development across the world that will revolutionize farming forever. It may take some time before such systems are perfected, but there's no doubt in my mind that, given enough time and effort, they will be.

The future impact on the migrant workers, however, is less clear. Will they then return to their native countries where automation is less prevalent to seek work, or travel further afield? Sadly, whether they run to the west to Tulip, Texas or to the east to Somaliland, their future employment is all used up.

Wednesday, November 28, 2012

Seeking support

When discussing the design of any new vision system with systems integrators, I'm always intrigued to discover which specific hardware and software they have chosen to implement their systems.

More often than not, of course, the two key reasons any product is chosen are its technical merits and its price. But there is always a third, and perhaps more important, reason that systems integrators opt for the products that they do. And that's the support that they receive from the distributor, or reseller, that sells them the product.

As one might expect, the distributor or reseller that fully comprehends, and can explain, both the advantages and the disadvantages of his products is more likely to win an order than one that simply ships products without much of an idea of how they work or how to integrate them into a system.

But these days, there is more to winning a sale than that. The distributor or reseller that also has some understanding of how his products will fit into the bigger scheme of things has an even greater advantage over those that simply have a basic knowledge of one or two product lines. Indeed, it is this more holistic approach that will almost guarantee that a product is specified into a new machine.

In one recent conversation I had with a systems integrator, he stated quite clearly that his choice of camera, system software and lighting products had been heavily influenced by the reseller that he discussed his machine vision needs with.

That reseller was obviously not only knowledgeable about many aspects of image processing, but was also wily enough to be able to leverage the expertise he passed along to the systems integrator into quite a lucrative sale.


In these strained economic times, however, many companies are reducing the number of experienced folks that they have on board in favor of younger, less well-paid individuals. Naturally enough, these folks haven't had enough years in the industry to be au fait with anything beyond a basic understanding of their own company's product lines.

Fortunately, the systems integrator that I spoke to had found and worked with a reseller that clearly understood the merits of hiring and keeping experienced, multifaceted individuals who could assist him with the task of developing his vision system.

Wednesday, November 21, 2012

No vision at all

Anyone with an Xbox hooked up to a Kinect camera will appreciate the fact that gesture recognition has added all sorts of interactive possibilities to gaming that simply didn't exist before.

But a vision system isn't the only way of detecting the gestures of individuals to enable them to control computer systems, as one company proved this month when it launched an alternative gesture recognition technology that might challenge the role of vision in certain applications.

That company was none other than Microchip Technology (Chandler, AZ, USA), whose so-called GestIC system is based on the idea of equipping a device such as a tablet PC with a number of thin electrodes that create an electric field around the device when an electric current is passed through them.

When a user's hand moves into the area around the tablet, the electric field distribution becomes distorted as the field lines intercepted by the hand are shunted to ground through the human body. The distortion of the field is then detected by a number of receiver electrodes integrated onto the top layer of the device.

To support this concept, Microchip Technology has -- as you might have expected -- produced an integrated circuit named the MGC3130 that not only acts as a signal generator but also contains signal-conditioning circuitry and analog-to-digital converters that convert the analog signals from the receivers into a digital format.

Once they are in that format, a 32-bit signal processor analyzes the signals using an on-chip software suite that can track the x/y/z position of the hand as well as determine the gestures of a user. These are then relayed to an applications processor in the system, which performs commands such as opening applications, pointing, clicking, zooming and scrolling.
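
To make that flow a little more concrete, here is a minimal sketch of how an applications processor might act on such gesture reports. The message fields, gesture labels and command mapping below are my own illustrative assumptions, not Microchip's actual MGC3130 interface:

```python
# Hypothetical sketch of an applications processor handling gesture reports
# from an electric-field gesture controller. Field names, gesture labels and
# the command mapping are illustrative assumptions, not Microchip's API.
from dataclasses import dataclass

@dataclass
class GestureReport:
    x: float          # hand position, normalized 0..1
    y: float
    z: float          # height above the sensing surface
    gesture: str      # e.g. "flick_left", "circle_cw", or "none"

def zoom_in():     print("zoom in")
def zoom_out():    print("zoom out")
def next_page():   print("scroll to next page")
def prev_page():   print("scroll to previous page")

# Map recognized gestures to application-level commands.
COMMANDS = {
    "circle_cw":   zoom_in,
    "circle_ccw":  zoom_out,
    "flick_left":  next_page,
    "flick_right": prev_page,
}

def dispatch(report: GestureReport) -> None:
    """Route a decoded gesture report to the matching UI command."""
    action = COMMANDS.get(report.gesture)
    if action:
        action()
    else:
        # No discrete gesture recognized: fall back to pointing
        # using the tracked x/y/z position of the hand.
        print(f"pointer at ({report.x:.2f}, {report.y:.2f}), height {report.z:.2f}")

# Example: a flick to the left, followed by raw position tracking.
dispatch(GestureReport(0.4, 0.6, 0.1, "flick_left"))
dispatch(GestureReport(0.5, 0.5, 0.2, "none"))
```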


While the folks at Microchip Technology believe that the GestIC system will enable the "next breakthrough in human-machine-interface design", and are touting the fact that it offers the lowest power consumption of any 3-D sensing technology, the system is still limited to a detection range of just 15 cm.

So while it does offer an interesting alternative to a camera-based system, I don't think that the folks at Microsoft will be too worried that it will ever compete with their Kinect camera.

Samples of Microchip's MGC3130 -- which comes in a 5x5 mm 28-pin QFN package -- are available today. Volume production is expected in April 2013, with pricing of $2.26 each in high volumes. An evaluation kit is available now for $169. More information is available here.

Friday, November 16, 2012

The Italian goal

Those of you who traveled to last week's VISION 2012 show in Stuttgart might have noticed that I wasn't the only editor from Vision Systems Design to attend the event.

That's right. On my trip to Germany I was accompanied by none other than our European editor Dave who was also there to discover what was new, original and inventive in the vision systems business.

During his time at the show, I asked Dave to stop and chat with Signor Donato Montanari, the General Manager of the Vision Business Unit of Datalogic (Bologna, Italy), a company which -- as you may recall -- took over Minneapolis, MN-based PPT Vision last year.

I wanted Dave to find out how a large multinational company like Datalogic was faring in these precarious economic times, as well as to discover what new technical developments, if any, had taken place since the takeover.

On the European front, Dave was hardly surprised to hear that Datalogic's vision business had remained pretty much flat this year, since most of Southern Europe is still in the economic doldrums. But Signor Montanari painted a much more optimistic picture of his company's fortunes in the US and Asia. Thanks to the fact that the entire US Datalogic sales force had been brought to bear to sell the new vision product line, business was up ten percent this year in the US and a whopping forty percent in Asia.

But what of the technology that Datalogic inherited, I hear you ask? Well, apparently, there have been some changes there too. While the old PPT had subcontracted out the manufacture of its cameras, the Datalogic management has now brought the manufacturing process in-house.

But that's not all. On the hardware front, the PPT cameras that were based on an embedded PC architecture have now been redesigned and rebuilt around digital signal processors, resulting in a cost reduction. And, in a six-month effort, the existing PPT drag-and-drop vision programming software environment has been ported over to them.


Now, as many of you may know, PPT had a rather interesting business model with respect to its cameras and software. If you bought cameras from the company, the software development environment was provided for free. For the time being, it appears as if Datalogic will be keeping to that model.

But Signor Montanari said that, next year, Datalogic plans to make the integration of third-party software into its software development environment a whole lot easier for engineers than it has been in the past. And he also said that the company was beefing up its technical support centers across the globe to offer the capability of customizing the PPT software for specific customer applications.

Whether the company becomes a dominant player in the machine vision business remains to be seen. But from listening to Signor Montanari speak, Dave seems convinced that it's a goal the Italians will be trying their best to achieve.

Wednesday, November 14, 2012

Vision 2012: A Space Odyssey

According to the most recent figures released by its organizers, the Stuttgart VISION 2012 show was still the place to be seen for those involved in the machine vision industry. Testifying to that fact, more than 7,000 visitors attended the 25th anniversary of the show last week, roughly the same number that showed up last year.

Unlike previous years, this year all the exhibitors found themselves under one roof in the L-Bank Forum of the Stuttgart exhibition center. And there were plenty of them for the attendees to visit too -- the 25th anniversary of the show saw no fewer than 372 exhibitors parading their wares, an increase on the 351 exhibitors that attended the show last year.

And what a sight it was too. Unlike previous years, many smaller- to medium-sized companies had opted for much larger booths at this year's show. In a clear attempt to impress the attendees and outdo their competition, they found themselves cheek by jowl with more well established outfits, dwarfing them with booths that appeared to be almost as high as the Bradbury Building.

There was an increase in the number of exhibitors that came from outside Germany this year too. While last year just 46 per cent of those exhibiting came from further afield, this year the figure was up to 49 per cent. Representing 32 countries in all, the exhibitors brought with them cameras, vision sensors, frame grabbers, software tools, illumination systems, lenses and accessories, as well as complete machine vision systems.

Of the attendees to the show, the organizers say that 85 per cent were involved in purchasing and procurement decisions in their company. As you might expect, most of them were primarily interested in machine vision components and applications. But an increasing number of visitors expressed an interest in turnkey machine vision systems as well.


Aside from checking out the new products on display, the VISION show was also a place where one could gain some insight into how vibrant the vision system industry is. At the VISION press lunch held on Tuesday, November 6, for example, Dr. Olaf Munkelt, the Managing Director of image processing software vendor MVTec Software and Chairman of the Executive Board of the VDMA Machine Vision Group, presented an overview of the state of the German machine vision market.

The figures he showed highlighted the fact that the total turnover for machine vision systems in Germany was expected to remain pretty much flat this year, with a growth of just two percent in 2013. But there was better news from outside Germany, where orders for machine vision systems were predicted to rise at a somewhat higher rate.

But not every company is experiencing low growth rates. One executive that I ran into on my way back to England from VISION 2012 claimed that his company had experienced a remarkable 20 per cent growth in orders this year, a trend he clearly expected to continue next year as he was actively looking to hire a number of engineers to meet the demand for his products.

Next year, VISION 2013 will be staged two months earlier, from September 24 to 26, so none of us will have quite as long to wait to get our next dose of machine vision technology.

But will that be long enough for those involved in our industry to really develop any game changing technology? One company owner I spoke to didn't think so. He said that his outfit would be doing no more than demonstrating the same products that he displayed this year. By then, he said, at least his engineering team might have had time to iron out all the bugs in them!

Friday, November 2, 2012

Visions of the future

Twenty-five years ago, a machine vision system that performed a simple inspection task might have cost $100,000. Today, a similar system based around a smart camera can perform the same task for $3,000.

The decrease in the price of the sensors, processors and lighting components used to manufacture such systems has been driven by the widespread deployment of those components in high-volume consumer products. And that trend is likely to continue into the future.

As the cost of the hardware of such systems has decreased, the capabilities of integrated software development environments have grown. As a result, rather than hand-coding their systems from scratch, designers can now choose from a variety of software packages whose large libraries of image processing functions they can use to program their systems.
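
To illustrate just how little hand-written code such a library-based approach can require, here is a minimal sketch of a simple pass/fail blob inspection. The open-source OpenCV library stands in here for the commercial packages discussed above, and the image file name and area limits are hypothetical:

```python
# A minimal sketch of a simple pass/fail inspection built from library calls
# rather than hand-written pixel code. OpenCV stands in for the commercial
# packages discussed above; the file name and area limits are assumptions.
import cv2

MIN_AREA, MAX_AREA = 4500, 5500   # acceptable part area in pixels (assumed)

image = cv2.imread("part.png")                       # load a captured frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)       # convert to grayscale
blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress sensor noise
_, binary = cv2.threshold(blurred, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # segment part

# findContours returns (contours, hierarchy) in OpenCV 4.x; taking [-2]
# keeps the same code working with the older 3.x API as well.
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]

if contours:
    area = max(cv2.contourArea(c) for c in contours)  # largest blob = the part
    print("PASS" if MIN_AREA <= area <= MAX_AREA else "FAIL", f"(area={area:.0f})")
else:
    print("FAIL (no part found)")
```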

The combination of inexpensive hardware and easy to use programming tools has enabled OEM integrators to develop systems in a much shorter period of time than ever before, offering them the possibility of developing several systems each year for customers in a variety of industries.

The decreased price of hardware and the ease of use of many software packages have also allowed many sophisticated end users to take on the role once performed by the systems integrator, developing their own systems in house rather than turning to outside expertise.

Over the next ten years, engineers can expect to see more of the same. As the system hardware decreases in price, however, they can also expect to see companies develop more highly specialized processors, sensors and lighting systems in an attempt to differentiate their product lines from those of their competition.

On the software front, developers will continue to refine their software packages and add new capabilities, while driving down the cost of deployment by offering subsets of their full development environments in the form of software apps to their customers.


As 3-D hardware and software become more prevalent, designers will also be challenged to understand how capturing and processing images in 3-D might enable them to develop more complex systems to tackle their applications.

In the December issue of Vision Systems Design, I'll be bringing out my crystal ball to see if I can predict some more emerging trends in the field of machine vision. Be sure to pick up your copy when it lands on your doormat.