US 7263451 B1
A method for correlating semiconductor process data analyzes a semiconductor device that has been treated by a process, to produce process data related to the process. The data is converted into an image pattern, and automatic image retrieval is used to identify other devices having similar images. The process data is then correlated with prior process data of the other devices having the similar images.
1. A method for correlating semiconductor process data, comprising:
analyzing a semiconductor device, that has been treated by a process, to produce process data related to the process;
converting the data into an image pattern;
using automatic image retrieval to identify other devices having similar images;
correlating the process data with prior process data of the other devices having the similar images; and
outputting the result of the correlation.
2. The method of
3. The method of
4. The method of
5. The method of
6. A method for correlating semiconductor process data, comprising:
analyzing a semiconductor wafer for defect data;
converting the data into a static image pattern;
using automatic image retrieval to identify other wafers having similar defect image patterns;
correlating the process data of the semiconductor wafer with prior process data of the other wafers having the similar images; and
outputting the result of the correlation.
7. The method of
8. The method of
9. The method of
10. The method of
11. Apparatus for correlating semiconductor process data, comprising:
means for analyzing a semiconductor device, that has been treated by a process, to produce process data related to the process;
means for converting the data into an image pattern;
means for using automatic image retrieval to identify other devices having similar images;
means for correlating the process data with prior process data of the other devices having the similar images; and
means for outputting the result of the correlation.
12. The apparatus of
13. The apparatus of
14. The apparatus of
15. The apparatus of
16. Apparatus for correlating semiconductor process data, comprising:
means for analyzing a semiconductor wafer for defect data;
means for converting the data into a static image pattern;
means for using automatic image retrieval to identify other wafers having similar defect image patterns;
means for correlating the process data of the semiconductor wafer with prior process data of the other wafers having the similar images; and
means for outputting the result of the correlation.
17. The apparatus of
18. The apparatus of
19. The apparatus of
20. The apparatus of
1. Technical Field
The present invention relates generally to semiconductor technology and more specifically to semiconductor research, development, and production management.
2. Background Art
Electronic products are used in almost every aspect of life, and the heart of these electronic products is the integrated circuit. Integrated circuits are used in a wide variety of products, such as televisions, telephones, and appliances.
Integrated circuits are made in and on silicon wafers by extremely complex systems that require the coordination of hundreds or even thousands of precisely controlled processes to produce a finished semiconductor wafer. Each finished semiconductor wafer has hundreds to tens of thousands of integrated circuits, each worth hundreds or thousands of dollars.
The ideal would be to have every one of the integrated circuits on a wafer functional and within specifications, but because of the sheer numbers of processes and minute variations in the processes, this rarely occurs. “Yield” is the measure of how many “good” integrated circuits there are on a wafer divided by the maximum number of possible good integrated circuits on the wafer. A 100% yield is extremely difficult to obtain because minor variations, due to such factors as timing, temperature, and materials, substantially affect a process. Further, one process often affects a number of other processes, often in unpredictable ways.
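The yield definition above can be expressed as a simple calculation. The following Python sketch is purely illustrative and is not part of the patent's disclosure; the numbers are hypothetical:

```python
# Illustrative only: yield is the number of good integrated circuits on a
# wafer divided by the maximum possible number of good integrated circuits.
def wafer_yield(good_dice, max_possible_dice):
    """Return yield as a fraction between 0.0 and 1.0."""
    return good_dice / max_possible_dice

# Hypothetical wafer: 450 good dice out of a possible 500.
print(wafer_yield(450, 500))  # 0.9, i.e. a 90% yield
```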
In a manufacturing environment, the primary purpose of experimentation is to increase the yield. Experiments are performed in-line and at the end of the production line with both production wafers and experimental wafers. However, yield enhancement methodologies in the manufacturing environment produce an abundance of very detailed data for a large number of wafers on processes subject only to minor variations. Major variations in the processes are not possible because of the time and cost of using production equipment and production wafers. Setup times for equipment and processing time can range from weeks to months, and processed wafers can each contain hundreds of thousands of dollars worth of integrated circuits.
The learning cycle for the improvement of systems and processes requires coming up with an idea, formulating a test(s) of the idea, testing the idea to obtain data, studying the data to determine the correctness of the idea, and developing new ideas based on the correctness of the first idea. The faster the correctness of ideas can be determined, the faster new ideas can be developed. Unfortunately, the manufacturing environment provides a slow learning cycle because of manufacturing time and cost.
Recently, the great increase in the complexity of integrated circuit manufacturing processes and the decrease in time between new product conception and market introduction have both created the need for speeding up the learning cycle.
This has been accomplished in part by the unique development of the integrated circuit research and development environment. In this environment, the learning cycle has been greatly accelerated and innovative techniques have been developed that have been extrapolated to high volume manufacturing facilities.
To speed up the learning cycle, processes are accelerated and major variations are made to many processes, but only a few wafers are processed to reduce cost. The research and development environment has resulted in the generation of tremendous amounts of data and analysis for all the different processes and variations. This, in turn, has required a large number of engineers to do the analysis. With more data, the answer has always been to hire more engineers.
However, this is not an acceptable solution for major problems. For example, during the production of semiconductor devices, in-line failure or defect inspections are conducted to obtain defect data about the devices. In-line defects are detected by inspection techniques conducted between process steps for fabricating the semiconductor devices. (Actual defects are determined later using electrical tests after chip fabrication is completed.) The defect data is typically collected by laser scanning, optical, or scanning electron microscope (“SEM”) tools. Defects may include a plurality of different events that may have very different respective impacts on chip yield. Any irregularities, such as structural imperfections, particles, residuals, or embedded foreign material, are considered defects.
The inspection techniques often result in a total count of the number of defects detected in each process step, but not an abundance of in-depth or specific defect data. Total count information alone is not adequate for assigning good yield loss projections to defects detected at each particular process step.
It is common practice in the semiconductor industry, however, to inspect wafers at various times by employing inspection tools during production. The better the inspections, the better the data that can potentially shorten yield learning cycles by making it possible to react quickly to process problems. The process engineer therefore needs to know the number of defects per wafer, the x-y coordinates of each defect, and a set of parameters (different for different tools) specific for each particular defect. To obtain yield impact projections, it is then desirable to correlate the actual defect data to actual electrical failures. Such data can be crucial for maximizing yields of a product.
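The per-wafer defect record described above (defect count, x-y coordinates, and tool-specific parameters per defect) can be sketched as a simple data structure. This is an illustrative assumption, not a structure disclosed in the patent; all field names are hypothetical:

```python
# Hypothetical sketch of the defect data a process engineer needs per wafer:
# the number of defects, the x-y coordinates of each defect, and a set of
# tool-specific parameters for each particular defect.
from dataclasses import dataclass, field

@dataclass
class Defect:
    x: float                                     # x coordinate on the wafer
    y: float                                     # y coordinate on the wafer
    params: dict = field(default_factory=dict)   # tool-specific parameters

@dataclass
class WaferInspection:
    wafer_id: str
    defects: list                                # list of Defect records

    @property
    def defect_count(self):
        return len(self.defects)

# Hypothetical inspection result for one wafer.
scan = WaferInspection("LOT42-W07", [Defect(10.2, -3.1, {"size_um": 0.8})])
print(scan.defect_count)  # 1
```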
Speed is also critical for efficient manufacturing. Reviewing all the inspected defects, even using known automated classification, can significantly delay yield learning cycles and the subsequent manufacturing process for the semiconductor devices.
Therefore, a need exists for a system and method for correlating in-line defect data with prior known defect and yield data to determine a yield loss projection for each defect inspected wafer, and to suggest corresponding process anomalies associated with such defects so that appropriate process adjustments and corrections can be made.
Solutions to these problems have been long sought but prior developments have not taught or suggested any solutions and, thus, solutions to these problems have long eluded those skilled in the art.
The present invention provides a method for correlating semiconductor process data. A semiconductor device, that has been treated by a process, is analyzed to produce process data related to the process. The data is then converted into an image pattern. Automatic image retrieval is then used to identify other devices having similar images. The process data is then correlated with prior process data of the other devices having the similar images.
Certain embodiments of the invention have other advantages in addition to or in place of those mentioned above. The advantages will become apparent to those skilled in the art from a reading of the following detailed description when taken with reference to the accompanying drawings.
Referring now to
The four fundamental blocks are a generation block 101, an extraction block 102, an analysis block 103, and a presentation block 104. Each of the blocks can stand independently in the tester information processing system 100, and within these blocks are various commercially available techniques, methodologies, processes, and approaches as well as the invention disclosed herein. The four fundamental blocks are discussed in the approximate chronology that the blocks are used in the tester information processing system 100.
The tester information processing system 100 includes various pieces of commercially available production, test, research, and development semiconductor equipment, which operate on and manipulate information and/or data, which are generically defined herein as “information”. The tester information processing system 100 receives information from a tester 105, which is connected to a system-under-test 106.
In the integrated circuit field, the tester 105 can be a semiconductor test system for testing wafers or die and the system-under-test 106 can be anything from a complete wafer down to an element of an individual semiconductor device on a die.
In the generation block 101, basic information is generated looking at new and old products, new and old processes, product and process problems, unexpected or unpredictable results and variations, etc. Generation of the information may use the tester 105 itself, conventional test information, a personal computer, etc. It may also require new equipment and/or methods, which are described herein when required.
In the extraction block 102, usable information is extracted from the generated information from the generation block 101. Essentially, the generated information is translated into more useful forms; e.g., broken apart so it can be reassembled in different forms to show different inter-relationships.
For example, most testing equipment provides raw data in massive test files. Sometimes, millions of measurements provide millions of pieces of information, which must be digested and understood. The test files seldom have a user-friendly tabular output of parameter and value. Even where somewhat user-friendly outputs are provided, there are problems with the proper schema for storing the usable data and for formatting the data for subsequent analysis.
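The extraction step described above can be illustrated with a minimal parser that turns raw tester output into a tabular parameter/value form. The file format assumed here (one `PARAM=VALUE` record per line) is hypothetical; real test files are far richer:

```python
# Minimal sketch (assumed record format) of extracting usable parameter/value
# pairs from a raw test log so the data can be stored and reassembled in
# different forms for subsequent analysis.
def extract_measurements(lines):
    """Parse 'PARAM=VALUE' records into a dict; comments start with '#'."""
    table = {}
    for line in lines:
        line = line.strip()
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            try:
                table[key.strip()] = float(value)   # numeric measurement
            except ValueError:
                table[key.strip()] = value.strip()  # non-numeric field
    return table

# Hypothetical raw tester output.
raw = ["# lot 42, wafer 7", "VT_NMOS=0.42", "IDSAT=1.1e-3", "BIN=PASS"]
print(extract_measurements(raw))
```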
Extraction of the usable information may also require new equipment and/or methods. Sometimes, extraction includes storing the information for long duration experiments or for different experiments, which are described herein when required.
In the analysis block 103, the usable information from the extraction block 102 is analyzed. Unlike previous systems where a few experiments were performed and/or a relatively few data points were determined, the sheer volume of experiments and data precludes easy analysis of trends in the data or the ability to make predictions based on the data. Analysis of the extracted information may also require new equipment and/or methods, which are described herein when required.
In the presentation block 104, the analyzed information from the analysis block 103 is manipulated and presented in a comprehensible form to assist others in understanding the significance of the analyzed data. The huge amount of analyzed information often leads to esoteric presentations that are not useful per se, are misleading, or are boring. Proper presentation is often an essential ingredient for making informed decisions on how to proceed to achieve yield and processing improvements. In some cases, problems cannot even be recognized unless the information is presented in an easily understood and digested form, and this often requires new methods of presentation, which are described herein when required.
In the production of semiconductor devices, each process step must be developed and stabilized as quickly as possible. It is therefore essential to perform failure analysis rapidly, efficiently, and effectively, so that the results of the failure analysis can facilitate quick repair of the process defect that caused the failure.
Various kinds of failure analysis are performed with respect to each individual semiconductor device. In one process, a “bitmap signature”, or simply “bitmap”, is generated in which the distribution condition of passes and failures is mapped and displayed in accordance with the arrangement of the subject semiconductor devices.
With the development of very large scale integration (“VLSI”) devices, and ever larger and larger wafers, bitmaps have developed into data structures holding vast amounts of data. Identification of failure modes has become correspondingly more complicated. Furthermore, when analyzing the causes of the defects, the occurrence conditions must typically be analyzed for each individual fail bit. Therefore, as the bitmaps become larger, the bitmap processing time greatly increases, and analysis efficiency is correspondingly greatly reduced.
For example, a bitmap for a 128 MB semiconductor memory can require as much as 27 bytes to identify a particular bit address. If the semiconductor memory contains one KB of failed bits scattered across the chip, 27 KB of memory capacity will be required to record the bitmap. For a wafer having 25 chips, this then requires 675 KB, and for one lot of 50 wafers, 33 MB of memory capacity will be required. As such information accumulates for many lots over time, the relevant bitmap database becomes enormous. Consequently, a very large amount of computation time becomes necessary to extract the failure mode. Computational processes must often be repeated thousands upon thousands of times.
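The storage arithmetic above can be reproduced directly; note that the 675 KB per-wafer figure corresponds to 25 defective chips per wafer (27 KB × 25 = 675 KB, and 675 KB × 50 wafers ≈ 33 MB):

```python
# Reproducing the back-of-the-envelope bitmap storage figures (illustrative).
BYTES_PER_ADDRESS = 27           # bytes to record one failed-bit address
failed_bits       = 1024         # "one KB" of failed bits per chip
chips_per_wafer   = 25           # consistent with the 675 KB figure
wafers_per_lot    = 50

per_chip  = BYTES_PER_ADDRESS * failed_bits      # 27 KB per chip
per_wafer = per_chip * chips_per_wafer           # 675 KB per wafer
per_lot   = per_wafer * wafers_per_lot           # about 33 MB per lot

print(per_chip // 1024,                 # 27  (KB)
      per_wafer // 1024,                # 675 (KB)
      round(per_lot / (1024 * 1024)))   # 33  (MB)
```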
Referring now to
In one wafer defect bitmap representational embodiment, the bitmap is displayed with a layout and appearance similar to the drawing of
Referring now to
While the defect patterns in
In the analysis of process-caused wafer defects, an important and frequently employed analytical tool is the comparison of bitmap signatures across multiple wafers and wafer lots. Bitmap analysis currently employs a manual classification process using a pre-defined set of classification codes. Unfortunately, such a bitmap analysis process is slow, computation intensive, potentially limiting, and often inaccurate. A need therefore continues for better methods and systems for easily, quickly, and accurately comparing bitmap signatures across multiple wafers and wafer lots.
In one embodiment according to the present invention, this need is met by using automatic image retrieval. Thus, instead of treating the bitmap information as a collection of data points (e.g., the points represented by the crosses 208 (
Thus, each wafer forms the system-under-test 106 (
In the analysis block 103, the defect bitmaps are converted into static images that are compared with other prior images using A.I.R. to identify other wafers or dies having similar defect patterns or signatures. The prior comparison images may be stored within the analysis block 103 or externally thereto, as appropriate. The stored data also associates the prior wafers or dies with the corresponding defects and process anomalies known to have occurred in these prior wafers or dies. In this manner, the A.I.R. quickly identifies these anomalies as the anomalies that are most likely to have led to the defects in the current wafer under test.
The analytical results from the analysis block 103 are then output through the presentation block 104, thereby providing a quick, efficient, and accurate correlation of in-line defect data with prior known defect and yield data.
A method according to the present invention thus correlates semiconductor process data, such as wafer defect data, with known prior process data. The method thereby correlates related process signatures, such as bitmap signatures, of semiconductor devices that have been treated by a semiconductor manufacturing process. The correlation is accomplished by converting the process data into images and using A.I.R. to identify other devices having similar image patterns. The correlation then predicts that the causes of the current defects and process data correspond to the known causes for the other devices that have the similar image patterns.
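The retrieval step can be sketched as follows: treat each wafer defect bitmap as a small image, rank stored wafers by image similarity, and report the known anomaly of the best match. This is a hedged toy, not the patent's A.I.R. system; a real implementation would use richer image features than the plain cosine similarity over flattened bitmaps used here, and all wafer IDs and anomaly names are invented:

```python
# Illustrative sketch: rank prior wafers by similarity of their defect
# bitmaps (treated as flat binary images) to a query wafer, so the known
# process anomaly of the closest match can be suggested as the likely cause.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_similar(query_bitmap, library):
    """Return (wafer_id, known_cause) pairs ranked by image similarity.

    library maps wafer_id -> (bitmap, known process anomaly), where each
    bitmap is a flat list of 0/1 defect indicators.
    """
    ranked = sorted(
        library.items(),
        key=lambda item: cosine_similarity(query_bitmap, item[1][0]),
        reverse=True,
    )
    return [(wid, cause) for wid, (bitmap, cause) in ranked]

# Toy library of prior wafers with known anomalies (all names invented).
library = {
    "W01": ([1, 1, 0, 0, 1, 1, 0, 0, 1], "etch chamber anomaly"),
    "W02": ([0, 0, 0, 1, 1, 1, 0, 0, 0], "CMP pad wear"),
}
query = [1, 1, 0, 0, 1, 0, 0, 0, 1]   # resembles W01's signature
print(retrieve_similar(query, library)[0])  # ('W01', 'etch chamber anomaly')
```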
Yield learning cycles are therefore accelerated by the present invention, defect causes are more quickly identified, and corrective yield impact projections are promptly and accurately generated. The corresponding manufacturing process problems are then corrected and optimized more quickly, and process yields are correspondingly improved more rapidly.
It will be readily understood, based upon this disclosure, that the same methodology and equipment of the present invention may also be used to analyze other semiconductor process data types that are currently treated as collections of individual data points. Examples include Vt distributions, IV curves, and similar data point collections. The result is much faster and more accurate analyses that advantageously avoid current limitations such as manual classification, pre-defined (and thus potentially limiting) sets of classification codes, intensive computation, and so forth.
Referring now to
While the invention has been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the spirit and scope of the included claims. All matters hithertofore set forth or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense.