What are the questions we should ask before diving in?
To deliberately misquote and mangle Shakespeare once again: I come to praise AI, not to bury it. But does the potential evil it may do live after it, and is the good oft interred in the dataset?
I apologize, but … discussion of the benefits of AI in all manner of applications has been the flavor of the month for much of the past two years, and there seems to be no end in sight! It has been one of the drivers of processor manufacture and use in recent times. However, two recent articles from BBC News highlight some pros and cons of using AI for x-ray inspection and test.
The first1 describes how AI has been trained to best radiologists at examining mammograms for potential issues, based on a dataset of 29,000 images. The second2 is more nuanced and suggests that after our recent “AI Summer” of heralded successes on what could be considered low-hanging fruit, we may now be entering an AI Autumn or even an AI Winter. In the future, it suggests, successes with more complex problems may be increasingly difficult to achieve, and attempts may be made only because of the technology's hype rather than the reality of its results.
For x-ray inspection and AI, it is important to distinguish between what I will call 1-D and 2-D AI approaches. 1-D AI, I suggest, is prediction from huge quantities of data, within which patterns can be discerned far more easily by a machine than by a human observer. It is based on individual data fields, such as those garnered from shopping and social media sites: for instance, a pop-up advertisement for something under consideration for purchase, or inferences about one's political and societal alignments based on one's social media selections and feeds. We may actively or passively choose to provide such information, and its predictive abilities may be construed as for good or ill.
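To make that distinction concrete, here is a minimal sketch of what I am calling 1-D AI – a prediction from individual data fields. The feature names and figures are entirely invented for illustration, and a real system would, of course, train on vastly more data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical shopper profile: [pages_viewed, cart_adds, prior_purchases].
# All names and numbers here are invented purely for illustration.
X = np.array([[12, 3, 1],
              [ 2, 0, 0],
              [30, 5, 4],
              [ 1, 0, 0]])
y = np.array([1, 0, 1, 0])  # 1 = went on to buy

model = LogisticRegression().fit(X, y)

# Predicted probability that a new visitor buys - what drives the pop-up ad
print(model.predict_proba([[8, 2, 1]])[:, 1])
```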
In 2-D AI, identification and pass/fail analysis are based on images, as required for x-ray inspection. This approach, I believe, raises whole new levels of complexity for the AI task. Thus, I have the following questions about how 2-D AI will work with the x-ray images we capture for electronics inspection:
As a comparison for electronics x-ray inspection today, consider the BBC example of analyzing mammograms.1 That study used a large number of images, presumably of similar image quality, with fields of view that are, I understand, painfully obtained: the tissue is compressed to a narrower range of thickness in order to achieve more consistency across the dataset! In electronics applications, do we have a sufficient quantity of similar images of our boards and components for the AI to produce a reasonable test? Is there more positional and density variation possible in our electronics under test than in a mammogram?
What does this mean for x-ray inspection of electronics? Many equipment suppliers already offer AI capabilities. But what questions should we ask about this amazing technology and its suitability and capabilities for the tasks we must complete? We do not know precisely the algorithm used for our tests. We are not certain of having sufficient (and equivalent) images, or of how many are needed. Adding more over time should give better analysis – a larger dataset – but are the new images consistent with what went before and, if not, are they materially changing the algorithm for the better, indicating some escapes may have occurred in the past? Sophistry, perhaps. But if we do not know what the machine is using to make its pass/fail decisions, are we satisfied with an “it seems to work” approach?
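One crude way to probe that question – and everything in this sketch, from the random-forest stand-in to the synthetic data, is my assumption rather than anything a vendor actually does – is to keep a fixed reference set of images and compare verdicts before and after retraining:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins: rows are flattened x-ray images, labels are pass/fail.
X_old, y_old = rng.normal(size=(500, 64)), rng.integers(0, 2, 500)
X_new, y_new = rng.normal(size=(100, 64)), rng.integers(0, 2, 100)
X_ref = rng.normal(size=(200, 64))  # fixed reference images, never trained on

old = RandomForestClassifier(random_state=0).fit(X_old, y_old)
new = RandomForestClassifier(random_state=0).fit(
    np.vstack([X_old, X_new]), np.concatenate([y_old, y_new]))

# Fraction of reference verdicts that flipped after adding the new images
flips = np.mean(old.predict(X_ref) != new.predict(X_ref))
print(f"{flips:.1%} of reference verdicts changed after retraining")
```

If a meaningful fraction of verdicts flip, the new images have materially changed the algorithm – whether for better or worse, you still have to determine.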
The more complicated the image, the larger the dataset needed to provide a range of pass/fail examples within an inspection envelope. The variability across the 29,000 mammogram images may well lie within a narrower envelope than that of the BGA in Figures 2 and 3. Perhaps AI for electronics is best suited today to cases where the image is much simpler and smaller, as in Figure 1. Automatically identifying variations in the BGA solder balls would naturally be assumed a better task for the AI approach, but does the variability of the surrounding information affect the pass/fail algorithm? Add the potential for movement of components during reflow, warpage of boards, and so on, and we have more variability in the placement of the features of interest, perhaps requiring a larger image dataset to cover this wider range of possibilities. Then add an oblique view for analysis, and these variabilities enlarge the envelope of possibilities further. How many images do you need to obtain an acceptable test: 100, 1,000, 10,000, 100,000? A bad test is worse than no test, as you are then duplicating effort. Will you get that many images in the test run of the boards you make? And outside of cellphones, PCs, tablets and automotive (?) applications, is there sufficient product volume to obtain the necessary dataset? If AI tests are applied in automotive applications, is there a complicated equation balancing quality of test vs. volume of product vs. complexity/quality of image over time vs. the safety-critical implications should an escape occur?
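There is no agreed formula relating envelope width to the number of images required, but the width itself can at least be measured. A minimal sketch, on synthetic image stacks of my own invention, is to take the per-pixel spread across aligned images; the wider the spread, the more examples the AI plausibly needs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical stacks of aligned x-ray images (200 images, 128 x 128 px).
# The noise levels are invented to mimic a simple joint vs. a cluttered BGA.
simple_stack = rng.normal(0.5, 0.02, size=(200, 128, 128))
bga_stack = rng.normal(0.5, 0.15, size=(200, 128, 128))

# Mean per-pixel standard deviation: a crude proxy for the variation envelope
print("simple image envelope:", simple_stack.std(axis=0).mean())
print("BGA image envelope:   ", bga_stack.std(axis=0).mean())
```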
I have asked too many questions and admittedly do not have the answers. I hope the AI experts do and can advise you accordingly. Perhaps the best approach now is to use AI if you are comfortable doing so, and to consider the images you will be using for your AI test. The simpler the image, the quicker and better the results, perhaps, but does that describe the imaging problem you want your AI to solve? Consider the opportunity to use PCT to obtain images at discrete layers within a board's depth, improving the AI analysis by decluttering the images of overlapping information. However, are you at the right level in the board if there is warpage? And are you prepared to take at least 8x as long (and perhaps substantially longer) for the analysis, because a minimum of eight images will be needed to create the CT model from which the analysis can be made?
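That 8x figure is a floor, not a ceiling: once reconstruction time is added, the multiple only grows. A back-of-envelope sketch, with both durations assumed purely for illustration:

```python
# Back-of-envelope timing: single 2-D image vs. a minimal eight-image CT model.
acq_s = 2.0    # seconds to acquire one 2-D x-ray image (assumed)
recon_s = 10.0 # seconds to reconstruct the CT model (assumed)
n_proj = 8     # minimum number of images cited above for the CT model

t_2d = acq_s
t_ct = n_proj * acq_s + recon_s
print(f"2-D: {t_2d:.0f}s   CT: {t_ct:.0f}s   ratio: {t_ct / t_2d:.0f}x")
```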
There are definitely x-ray inspection tasks in electronics for which AI can be considered and used today. Is your application one of them? Or is it, as Facebook AI research scientist Edward Grefenstette says,2 a case of “If we want to scale to more complex behavior, we need to do better with less data, and we need to generalize more”? Ultimately, does any of this matter if we accept that AI is only an assistant rather than the explicit arbiter of the test? AI as the assistant may well be acceptable if the confidence level of the result matches or betters that of human operators. However, can or should AI be used without such oversight, today or in the future, given what can practically be achieved? Whatever the decision, I predict this is likely to give metrologists endless sleepless nights!
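What might AI-as-assistant look like in practice? A minimal sketch is a triage policy: let the machine decide only when it is very sure, and hand everything else to the operator. The thresholds here are my invention – setting them well is precisely the metrologist's sleepless-nights problem:

```python
def disposition(p_fail: float, low: float = 0.05, high: float = 0.95) -> str:
    """Route a joint on model confidence: auto-pass, auto-fail, or defer
    to a human operator when the model is unsure. Thresholds are assumed."""
    if p_fail >= high:
        return "fail"
    if p_fail <= low:
        return "pass"
    return "refer to operator"

for p in (0.01, 0.50, 0.99):
    print(f"p(fail) = {p:.2f} -> {disposition(p)}")
```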
References
1. Fergus Walsh, “AI ‘Outperforms’ Doctors Diagnosing Breast Cancer,” BBC News, Jan. 2, 2020, bbc.co.uk/news/health-50857759.
2. Sam Shead, “Researchers: Are We on the Cusp of an ‘AI Winter’?,” BBC News, Jan. 12, 2020, bbc.co.uk/news/technology-51064369.
3. D-ID, deidentification.co.
Au.: Images courtesy Peter Koch, Yxlon International
, is an expert in use and analysis of 2-D and 3-D (CT) x-ray inspection techniques for electronics;