Industry News, Trends and Technology, and Standards Updates

Alan Weber: Vice President, New Product Innovations

Alan Weber is currently the Vice President, New Product Innovations for Cimetrix Incorporated. Previously he served on the Board of Directors for eight years before joining the company as a full-time employee in 2011. Alan has been a part of the semiconductor and manufacturing automation industries for over 40 years. He holds bachelor’s and master’s degrees in Electrical Engineering from Rice University.

Recent Posts

The 19th Annual European APC Conference is in the books!

Posted by Alan Weber: Vice President, New Product Innovations on Apr 23, 2019 10:34:00 AM

Cimetrix participated in the recent European Advanced Process Control and Manufacturing (apc|m) Conference, along with over 150 control professionals across the European and global semiconductor manufacturing industry. The site of this year’s conference was Villach, Austria, a picturesque town nestled in the eastern Alps just north of the Italian border in the state of Carinthia. This region is home to a number of high-tech companies and institutions all along the semiconductor manufacturing value chain, and since it was the first time the conference was held in Villach, the local hosts rolled out the red carpet.

This conference, now in its 19th year and organized by Silicon Saxony, is one of only a few global events dedicated to the domain of semiconductor process control and directly supporting technologies. As usual, the conference was very well organized, and featured a wide range of high-quality presentations, keynote addresses, and tutorial sessions. The supplier exhibits associated with this year’s event were especially numerous, as were the technical posters displayed in the exhibition area just outside the conference rooms.

As in many prior years, Cimetrix was privileged to present at this conference, as Alan Weber delivered a talk entitled “Addressing Connectivity Challenges of Disparate Data Sources in Smart Manufacturing.” The presentation highlighted how unifying data collection concepts—like explicit equipment models and generic structures for data collection plans—are increasingly necessary for maintaining the fidelity of a factory’s “digital twin” in Smart Manufacturing settings where the number of data source types is growing. This presentation resonated with a number of the key conference themes, so if you want to know more, feel free to download a copy of the entire presentation from our web site.

Other highlights of the conference included:

  • An update by Otto Graf on the ambitious vision and progress of the BOSCH 300mm wafer fab now under construction in Dresden. In this talk he emphasized the role that digital technologies will play in bringing up the fab and climbing the yield ramp, along with other features of a wall-to-wall Industrie 4.0 implementation.
  • “The Role of APC and Smart Manufacturing / Industrie 4.0 in New Reliability-Critical Markets“ by James Moyne (University of Michigan / Applied Materials) – James re-presented a number of the Smart Manufacturing technologies in the context of automotive industry requirements, especially the role that Subject Matter Expertise (i.e., people!) will play alongside other emerging technologies. He also pointed out that the Factory Integration chapter of the International Roadmap for Devices and Systems (IRDS) will be reorganized around the key tenets of Smart Manufacturing.

  • A thought-provoking invited talk from Dr. Roman Kern of the KNOW-CENTER titled “Possibilities and Challenges of Digitalization in the Semiconductor and Other Domains.” His key messages started with “Big Data is the new oil…. AI is the new electricity… and Data Science is the new lingua franca for leading global industries,” and then he went deeper into all of these.

  • Dr. Germar Schneider of Infineon Technologies built on the theme above in a practical setting with his “Chances and Challenges of Digitization in Semiconductor Fabs and Success Factors during the implementation” presentation. This was not only an in-depth look at some of the multi-year efforts at Infineon, but also included a summary of current digitization projects across the European manufacturing R&D community. 

  • Another invited talk from BMW was delivered by Rainer Hohenhoff, which covered “Product Data and Product Life Cycle Management in the face of new business models of the automotive industry.” In short, it discussed many of the ways a car company might make money even after people stop buying as many cars as they do today… and what collisions (pun intended) you could expect in the market as service companies like Google, Amazon, UBER, and others converge on the transportation consumer.

There were poignant moments as well. After 19 years of personal dedication to this event, both Gitta Haupold of Silicon Saxony and Dr. Klaus Kabitzsch, Program Committee Chair from the Technical University of Dresden, are retiring. They will definitely be missed!

The insights gained from these and the other 30+ presentations are too numerous to list here, but in aggregate, they provided an excellent reminder of how relevant semiconductor technology has become for our comfort, sustenance, safety, and overall quality of life.

This conference and its sister conference in the US are excellent venues to understand what manufacturers do with all the data they collect, so if this topic piques your interest, be sure to put these events on your calendar in the future. In the meantime, if you have questions about any of the above, or want to know how equipment connectivity and control fit into the overall Smart Manufacturing landscape, please contact us!

Contact Us

Topics: Industry Standards, Semiconductor Industry, Doing Business with Cimetrix, Events, Smart Manufacturing/Industry 4.0

The Giga Factory Minute Series: Industry Drivers

Posted by Alan Weber: Vice President, New Product Innovations on Feb 27, 2019 1:19:00 PM

It’s time for another episode in our Giga Factory Minute series... And in keeping with the theme of moving around the clock, we see that the focus of this month is “process steps completed.” However, rather than focus on manufacturing processes, we’ll use this opportunity to highlight an important industry process that is underway. Specifically, I’m referring to the role that the automotive market has in quite literally “driving” important segments of the semiconductor and electronics industries. Even as portions of the industry forecast a slowdown over the next 6-9 months, those in the automotive sectors are busier than ever.

From a wafer fab standpoint, one of the biggest news items over the past 6 months has been the announcement, groundbreaking, and construction of a new facility in Dresden, namely the Bosch RB 300 wafer fab. The automation aspects of this factory were featured in a very engaging presentation by Otto Graf (Managing Director, Robert Bosch Semiconductor Manufacturing Dresden GmbH) at the recent Innovationsforum for Automation in Dresden, Germany.

A modern automobile is brimming with electronics, as you can see from the systems highlighted in the figure below. (Image courtesy of chipsetc.com)

Every function in a car, from engine control to seating to headlamps to collision avoidance, is getting smarter… and this is a welcome sight for the scores of companies that provide the components that realize these functions.

But the full impact of automotive electronics includes all the infrastructure technologies external to the car, such as 5G telecommunications, “smart” roads and traffic signals, routing and congestion management systems for major cities, satellite systems that provide GPS information, entertainment content providers for the non-drivers, and law enforcement, just to name a few. And as driverless cars approach commercial feasibility, the scope and importance of these systems increase significantly.

In this context, anyone who thinks the “good old days” of the semiconductor and electronics industries are behind us isn’t paying attention -- so buckle up and prepare to enjoy the ride!

If your company plays a role in the manufacturing aspects of this exciting market, and you are struggling with how to address the equipment control and connectivity challenges you face, give us a call. We’ve got people who can help you make sense of it all, and products that can transform your problems into solutions.

EDA Implementation Insights: Competitive Differentiation

Posted by Alan Weber: Vice President, New Product Innovations on Feb 13, 2019 11:50:00 AM

In the first blog of this series, Clare Liu of Cimetrix China made the compelling case for choosing a commercial software platform for implementing the equipment side of the EDA (Equipment Data Acquisition) standards interface rather than developing the entire solution in-house.

Whenever this “make vs. buy” decision is discussed, however, the following question inevitably arises: “If we choose a standard product for this, how can we differentiate the capabilities of our equipment and its data collection capability from our competitors?” It’s a great question which deserves a well-reasoned answer.

Platform Choice and System Architecture

Most advanced fabs use EDA to feed their on-line FDC (Fault Detection and Classification) applications, which are now considered “mission-critical.” This means if the FDC application is down for any reason, the equipment is considered down as well. It is therefore important to choose a computing platform for the EDA interface that is highly reliable and has enough processing “headroom” to support the high bandwidth requirements of these demanding, on-line production applications. Moreover, this platform should not be shared by other equipment communications, control, or support functions, since these may adversely impact the processing power available for the EDA interface. 

Surprisingly, this approach is not universally adopted, and has been a source of problems for some suppliers, so it is an area of potential differentiation. 

Adherence to Latest Standards 

The automation requirements for the most advanced fabs call for the latest versions (Freeze II) of all the standards in the EDA suite, including the EDA Common Metadata (E164) standard. Dealing with older versions of the standard in the factory systems creates unnecessary work and complexity for the fab’s automation staff, so it is best to implement the latest versions from the outset. The Cimetrix CIMPortal Plus product makes this a straightforward process using the model development and configuration tools in its SDK (Software Development Kit), so there is absolutely no cost penalty for providing the latest generation of standards in your interface.

It takes time and effort for equipment suppliers with older versions of the standards to upgrade their existing implementations, so this, too, is an opportunity for differentiation.

Equipment Metadata Model Content

This is probably the area with the largest potential for competitive differentiation, because it dictates what a factory customer will ultimately be able to do with the interface. If an equipment component, parameter, event, or exception condition is not represented in the equipment model as implemented in the E120 (Common Equipment Model), E125 (Equipment Self-Description), and E164 (EDA Common Metadata) standards, the data related to that element cannot be collected. In effect, the metadata model IS the data collection “contract” between the equipment supplier and the fab customer.
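To make the “contract” idea concrete, here is a minimal, purely illustrative sketch of how an equipment metadata model ties components, parameters, and events together. It is not the actual E125 XML representation, and all names in it are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Parameter:               # E125-style parameter description (illustrative)
    name: str
    data_type: str
    units: str = ""

@dataclass
class Event:                   # equipment or state-transition event (illustrative)
    name: str
    associated_params: List[str] = field(default_factory=list)

@dataclass
class Module:                  # E120-style structural component (chamber, robot, ...)
    name: str
    parameters: List[Parameter] = field(default_factory=list)
    events: List[Event] = field(default_factory=list)
    submodules: List["Module"] = field(default_factory=list)

# Hypothetical fragment: if ChamberA's throttle valve position were omitted here,
# no EDA client could ever build a data collection plan that collects it.
equipment = Module(
    name="Etcher01",
    submodules=[
        Module(
            name="ChamberA",
            parameters=[
                Parameter("Pressure", "double", "Torr"),
                Parameter("ThrottleValvePosition", "double", "%"),
            ],
            events=[Event("RecipeStepStarted", ["Pressure"])],
        )
    ],
)
```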

This is why the most advanced fabs have been far more explicit in their automation purchase specifications with respect to equipment model content, going so far as to specify the level of detailed information they want to collect about process performance, equipment behavior, internal control parameters, setpoints and real-time response of common mechanisms like material handling, vacuum system performance, power generation, consumables usage, and the like. This level of visibility into equipment operation is becoming increasingly important to achieve the required yield and productivity KPIs (Key Performance Indicators) for fabs at all technology nodes.

The argument about “who owns this level of information about equipment behavior” notwithstanding, providing the detailed information the fabs want in a structure that makes it easy to find and access is a true source of differentiation.

Self-Monitoring Capability

If you really want to set your equipment apart from your competitors, consider going well beyond simply providing access to the level of information needed to monitor equipment and process behavior and include “built-in” Data Collection Plans (DCPs) that save your customers the effort of figuring out what data should be collected and analyzed to accomplish this. Your product and reliability engineering teams probably already know what the most prevalent failure mechanisms are and how to catch them before they cause a problem… why not provide this knowledge in a form that makes it easy to deploy?
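As a sketch of what one of these “built-in” DCPs might describe, consider the hypothetical vacuum-subsystem health check below. A real plan would be defined through the E134 Data Collection Management interface; the names, parameters, and limits here are illustrative assumptions only.

```python
# Hypothetical built-in Data Collection Plan (DCP) description, illustrative only.
vacuum_health_dcp = {
    "name": "VacuumSubsystemHealth",
    "purpose": "Detect slow pump-down and creeping base pressure before a failure",
    "trace_requests": [
        {
            "parameters": ["ChamberA.Pressure", "ChamberA.TurboPumpSpeed"],
            "sampling_interval_s": 0.5,
            "group_size": 10,          # samples buffered per trace report
        }
    ],
    "event_requests": ["ChamberA.PumpDownStarted", "ChamberA.PumpDownCompleted"],
    "activation": "Every chamber pump-down",
    "analysis_hint": "Compare pump-down time and base pressure to supplier limits",
}
```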

A few visionary suppliers are starting to talk about “self-diagnosing” and “self-healing" equipment… but it will be a small and exclusive group for a while – join them.

Readiness for Factory Acceptance

Before the fab’s automation team can fully integrate a new piece of equipment, it must follow a rigorous acceptance process that includes a comprehensive set of interface tests for standards compliance, performance, and reliability. This process is vital because solid data collection capability is fundamental for the rapid process qualification and yield ramp that shorten a new factory’s “time to money.” If you know what acceptance tests and related software tools the fab will use (which is now explicit in the latest EDA purchase specifications), you can purchase the same software tools and perform and document the results of those tests before shipping the equipment.

This will undoubtedly speed up the acceptance process, and your customers will thank you for the effort you took to put yourself in their shoes. Incidentally, this usually means the final invoice for the equipment will be paid sooner, which is always a good thing.

In Conclusion

In this posting, we have only scratched the surface regarding the sources of competitive differentiation. As you can see, choosing a commercial platform enables this far more readily than the in-house alternative, because it allows your development team to focus on the topics above rather than worrying about compliance to the standards. If you’d like to know more, please give us a call or click below to schedule a meeting.

Contact Us

Topics: Industry Standards, EDA/Interface A, Doing Business with Cimetrix, Smart Manufacturing/Industry 4.0, Cimetrix Products

The Giga Factory Minute Series Introduction: What to Watch for in 2019

Posted by Alan Weber: Vice President, New Product Innovations on Jan 17, 2019 11:05:00 AM

We introduced the Giga Factory Minute concept last year to highlight the impact that standards have in orchestrating the entire manufacturing process, from releasing unpatterned wafers into the line (1:00 on the figure) to the shipment of good die to the downstream assembly/test facilities (12:00). This year, we’ll use this same diagram to identify important industry trends, technologies, events, or other items of interest to our subscribers. Since there are 12 “hours” on the diagram, watch for a posting every month related to the topic in that segment.

January 2019

Since this is January, we’ll focus on the more general topic of electronics manufacturing product materials, of which “wafer starts” is the specific material type that begins the 4-month journey through the wafer fab.

In the early days of the automated factory industry, there were only a few material form factors to deal with… even when you go all the way back to the raw silicon and forward to the finished electronic product. (You can see most of these on the “Sand to Systems” infographic here.)
However, now that semiconductors have found their way into virtually every major industry on the planet, from computers to entertainment to transportation to agriculture to wearables and even to “ingestibles,” the automated material handling challenges across this product diversity have exploded. And it’s only going to get worse.

You may not be responsible for handling exotic material types anytime soon, but understanding the role that equipment connectivity standards can have at the earliest steps in a Smart Manufacturing process is useful nevertheless. Give us a call if you’d like to know more about how these technologies can benefit your operations. 

Schedule a Meeting

 

Topics: Semiconductor Industry, Smart Manufacturing/Industry 4.0, Giga Factory Minute

EDA Applications and Benefits for Smart Manufacturing Episode 6: Trace Data Analysis

Posted by Alan Weber: Vice President, New Product Innovations on Oct 25, 2018 11:20:00 AM

In this final article of the “EDA Application and Benefits” series we discuss an application that is one of the most basic and intuitive, but also provides the foundation for many of the emerging capabilities in the machine learning and artificial intelligence (AI) domain—trace data analysis. Moreover, of all the applications we’ve introduced over the past 6 months, trace data analysis is the one that most directly leverages the capabilities of the SEMI Equipment Data Acquisition (EDA) standards.

Problem Statement

When we ask fab process engineers and their supporting automation teams why they are now requiring the latest SEMI EDA/Interface A standards on their new equipment, the answer we hear most often is “To better understand equipment and process behavior.” And when asked why this cannot be achieved using the SECS/GEM interfaces, the answers are equally consistent: “The detailed information we need is either unavailable or cannot be collected at the frequencies we need to accurately see and characterize the behaviors we are interested in. And even if this were possible, we don’t have the operational freedom to change our data collection systems as quickly as our needs change, so we must have a more flexible approach.” 

What these engineers are looking for as a starting point is a way to easily specify a list of potentially related equipment parameters and collect their values at a rate that is fast enough to see how they are changing in relationship to one another. Human beings are wonderful at pattern recognition, and simply being able to juxtapose a set of signals on a “strip chart” display (see first figure below) can yield important insights into the underlying process. Of course, this capability is most useful when the engineer can precisely specify the timeframe of interest for this visual analysis. This is sometimes called data “framing” and can be accomplished by using equipment events to bracket the period of interest (see second figure below).

[Figure 1: “strip chart” display juxtaposing a set of related equipment parameter signals]

[Figure 2: trace data “framed” by equipment events that bracket the period of interest]
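The sketch below shows the “framing” idea in code: keep only the trace samples recorded between two bracketing equipment events. It is a simplified illustration of the concept, not the EDA interface itself, and the event names are assumptions.

```python
def frame_trace(samples, events, start_event, end_event):
    """Return only the trace samples recorded between two bracketing events.

    samples: list of (timestamp, {parameter: value}) tuples from trace reports
    events:  list of (timestamp, event_name) tuples from event reports
    """
    starts = [t for t, name in events if name == start_event]
    ends = [t for t, name in events if name == end_event]
    if not starts or not ends:
        return []
    t0, t1 = min(starts), max(ends)
    return [(t, values) for t, values in samples if t0 <= t <= t1]

# Hypothetical usage: isolate the deposition step of a run for visual analysis.
# framed = frame_trace(trace, events, "RecipeStep3Started", "RecipeStep3Completed")
```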

While humans may be good at pattern recognition, they quickly get overwhelmed when the number of parameters to view grows and/or the timespan to consider expands… which is where trace data analysis software enters the picture.

Solution Components

In addition to very flexible time-series data visualization tools, trace data analysis software packages must be able to “slice and dice” subsets of large data sets to compare every imaginable combination of equipment instance, process chamber, product, layer, recipe, fixture, consumable batch, shift, operator, … (you get the picture) to look for correlations between important factory metrics and the behavior of the equipment involved. Moreover, they must be able to identify and flag “abnormal” (which must be flexibly defined) situations for further analysis, since these may hold clues about incipient failures that traditional multivariate FDC (fault detection and classification) applications may not catch.

In fact, there is an emerging school of thought for fault detection that states “most of the time, the equipment is making good wafers, so unless there’s something very different about the tool behavior between the most recent lots and the current lot (as determined through trace data analysis), it’s very likely that the current lot is good as well.” This simplified approach has also been called “model-less FDC” because it mostly compares trace data signals rather than passing tool parameters into highly context-specific multivariate statistics-based models.
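A minimal sketch of that comparison logic, assuming traces have already been collected and framed for the lots involved (an illustration of the idea, not a production algorithm):

```python
import numpy as np

def modelless_check(recent_lot_traces, current_lot_trace, n_sigma=3.0):
    """Flag parameters whose current-lot behavior falls outside the recent-lot
    distribution -- a simplified "model-less FDC" comparison.

    recent_lot_traces: dict {parameter: 2-D array, one row of samples per recent lot}
    current_lot_trace: dict {parameter: 1-D array of samples for the current lot}
    """
    flagged = {}
    for parameter, history in recent_lot_traces.items():
        lot_means = history.mean(axis=1)               # one summary value per lot
        mu, sigma = lot_means.mean(), lot_means.std(ddof=1)
        current = current_lot_trace[parameter].mean()
        if sigma > 0 and abs(current - mu) > n_sigma * sigma:
            flagged[parameter] = {"current": current, "baseline": mu}
    return flagged  # empty dict suggests the current lot is very likely good too
```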

Of course, any trace data analysis application is only as good as the data that feeds it… which is where the EDA standards and the related equipment purchase specifications come into the picture.

EDA (Equipment Data Acquisition) Standards Leverage

Previous postings such as Episode 4 on Fault Detection and Classification and Precision Data Framing during Process Execution – Tricks of the Trade have highlighted the capabilities of the Freeze II EDA standards related to Data Collection Plans (DCPs) and the Trace Requests, Event Requests, and Exception Requests that comprise them. We have also highlighted the need for broad stakeholder involvement when creating the EDA section of an equipment purchase specification and described the process we’ve crafted to accomplish this.

However, to fully support a world-class trace data analysis application, it’s important to understand what to ask of the equipment suppliers. To this end, we’ve excerpted some key sample requirements from a typical purchase specification below.

  • Equipment Model Content (SEMI E120, E125, E164)

    • The hierarchical depth of the metadata model should include at least the “field replaceable unit” (FRU) level, and one or two levels below this for complex sub-systems.
    • The metadata model must contain command and status information for all equipment components that affect material movement. This includes not only material transfer elements such as robot arms, but also devices that may inhibit/enable material movement, such as gate valves, interlocks, etc.
    • The metadata model must include control parameters for all significant operating mechanisms and subsystems in the equipment. The control parameters may include but are not limited to: process variable setpoints and status values; control variable status values; PID tuning parameters, control limits, and calibration constants.
    • The metadata model must include any additional usage counters, timers, and other parameters that may be useful in time-based, usage-based, and condition-based maintenance scheduling algorithms.
    • The metadata model must contain parameters that describe consumption rates and levels for key process resources such as electricity, process gases, and other consumables. These are used in some of the FDC models to detect potentially abnormal process conditions.
    • Suppliers must provide a written description of the update rates, recommended sampling intervals, normal operating ranges and behaviors, and high/low/rate-of-change limits for all key process parameters.
    • Etc.

  • Data Collection Capability (SEMI E134)

    • Equipment must include built-in DCPs to support common equipment performance monitoring, diagnostic, and maintenance processes that are well known to the supplier. Documentation for these DCPs must define their purpose, activation conditions, interface bandwidth consumed, and the types of analysis the collected data enables.
    • Equipment parameters provided through the EDA interface must exhibit a number of data quality characteristics, including, but not limited to: an internal sampling/update rate sufficient to represent the underlying signal accurately; timing of trace reports that is consistent with the sampling interval within +/- 1.0%; values in adjacent trace reports must contain then-current values at the specified sampling interval; and rejection of obvious outliers.

  • Performance Requirements

    • Performance requirements will be expressed as combinations of sampling interval, # parameters per DCP, # of simultaneously active DCPs, group size, buffering interval, response time for ad hoc “one-shot” DCPs, maximum latency of event generation after the related equipment condition occurred, consistency of timestamps in trace reports with the specified sampling interval, and perhaps others.
    • Example: The EDA interface must be capable of reporting at least 5000 parameters at a sampling interval of 0.1 seconds (10Hz) with a Group Size of 1, for a total data collection capacity (bandwidth) of 50,000 parameters per second. It must also support simultaneous data collection from at least 5 clients while still achieving a total bandwidth of 50,000 parameters per second; Group Sizes greater than 1 may be used to achieve this level of performance. (A quick arithmetic check of this example appears after this list.)
    • Some equipment types may have more stringent performance requirements than others, depending on the criticality of timely and high-density data for the consuming applications.
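The example requirement above is easy to sanity-check with a little arithmetic; the sketch below does exactly that for a hypothetical DCP configuration.

```python
def dcp_bandwidth(parameters_per_dcp, sampling_interval_s, active_dcps=1):
    """Total parameter values reported per second for a set of identical DCPs."""
    return active_dcps * parameters_per_dcp / sampling_interval_s

# Example from the specification text: 5000 parameters at a 0.1 s (10 Hz) interval.
single_client = dcp_bandwidth(5000, 0.1)                  # 50,000 parameters/second
# One hypothetical way to satisfy the 5-client case: 1000 parameters per client.
five_clients = dcp_bandwidth(1000, 0.1, active_dcps=5)    # 50,000 in aggregate

assert single_client == five_clients == 50_000
```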

KPIs Affected

Trace data analysis will undoubtedly take its place among the other “mission-critical” applications in today’s fabs because of the increasing process complexity and the need to maintain the traditional “time to yield” production ramp. This is especially true for the industry pioneers now using the latest EUV scanners, as there will be much to learn about this new technology in the coming years.

Let Us Hear from You!


If you want to understand how the latest EDA standards and trace data analysis can support your future manufacturing objectives, or how to make this a reality in your Smart Manufacturing roadmap, please schedule a meeting!

Schedule a Meeting

Topics: EDA/Interface A, Smart Manufacturing/Industry 4.0, EDA in Smart Manufacturing Series

The Gigafab Minute and SEMI Standards: A Modern Miracle

Posted by Alan Weber: Vice President, New Product Innovations on Oct 4, 2018 11:04:00 AM

Even for someone who has been in this industry since the days of the TI Datamath 4-function calculator and the TMS1100 4-bit microcontroller (yes, that’s been a LONG time – the movie Grease premiered the same year!), it is sometimes hard to grasp the scope and complexity of what happens in today’s leading-edge semiconductor gigafabs. In fact, the only way to comprehend the enormous volume of transactions that occur is to consider what happens in a single minute – this is illustrated in the infographic we have labeled “The Gigafab Minute.”*


It’s amazing enough to think that a single factory can start 100,000 wafers every month on their cyclical journey through 1500 process steps… and have 99%+ of them emerge 4 months later to be delivered to packaging houses and then on to waiting customers. It’s quite another to realize that all of this happens continuously (24 x 7) and automatically.

“How is this possible?” you ask.

Well, a big part of the solution is the body of SEMI standards which have evolved since the early 80s to keep pace with the ever-changing demands of the industry. From an automation standpoint, many of these standards deal with the communications between manufacturing equipment and the factory information and control systems that are essential for managing these complex, hyper-competitive global enterprises.

A significant characteristic of these standards is that they have been carefully designed to be “additive.” This means that new generations of SEMI’s communications standards do not supplant or obsolete the previous generations, but rather provide new capabilities in an incremental fashion. To appreciate the importance of this in actual practice, consider how the GEM, GEM300, and EDA/Interface A standards support the transactions that occur in a single Gigafab Minute. 

Starting at 1:00 o’clock on the infographic and moving clockwise, you first notice that 2.31 wafers enter the line. Of course, these are actually released in 25-wafer 300mm FOUPs (Front-Opening Unified Pod), but 100K wafers per month translates to 2.31 per minute. Since these factories run continuously, once the line is full, it stays full. And with an average total cycle time of 4 months, this means that there are 400K wafers of WIP (work in process) in the factory at any given time. This number, and the total number of equipment (5000+), drive the rest of the calculations. 
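A quick back-of-the-envelope check of those figures, using the round numbers quoted above:

```python
wafer_starts_per_month = 100_000
minutes_per_month = 30 * 24 * 60                     # ~43,200 minutes

starts_per_minute = wafer_starts_per_month / minutes_per_month
print(round(starts_per_minute, 2))                   # ~2.31 wafers per minute

# Little's law: WIP = throughput x cycle time
cycle_time_months = 4
wip_wafers = wafer_starts_per_month * cycle_time_months   # ~400,000 wafers in the line
```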

GEM (Generic Equipment Model) – SEMI E30, etc.

The GEM messaging standards were initially defined in the early 90s to support the factory scheduling and dispatching applications that decide what lots should go to what equipment, the automated material handling systems that deliver and pick-up material to/from the equipment accordingly, the recipe management systems that ensure each process step is executed properly, and the MES (Manufacturing Execution System) transactions that maintain the fidelity of the factory system’s “digital twin.” 

Every minute of every day, GEM messages support and chronicle the following activities: 240 process steps are completed (i.e., 240 25-wafer lots are processed), 300 recipes are downloaded along with a set of run-specific adjustable control parameters, and 600 FOUPs are moved from one place to another (equipment, stockers, under-track storage, etc.). For each of these activities, the factory’s MES is notified instantaneously.

GEM300 – SEMI E40, E87, E90, E94, E157

With the advent of 300mm manufacturing in the mid-to-late 90s, a global team of volunteer system engineers from the leading chip makers defined the GEM300 standards to support fully automated manufacturing operations. Starting at 5:00 o’clock on the infographic, the number of transactions per minute jumps almost 3 orders of magnitude, from the monitoring of 900 control jobs across 4000 process tools to the tracking of 360,000 individual recipe step change events. This level of event granularity is essential for the latest generation of FDC (Fault Detection and Classification) applications, because precise data framing is a key prerequisite for minimizing the false alarm rate while still preventing serious process excursions. In this context, more than 6000 recipe-, product- and chamber-specific fault models may be evaluated every minute.

Simultaneously, the applications that monitor instantaneous throughput to prevent “productivity excursions” and identify systemic “wait time waste” situations depend on detailed intra-tool wafer movement events. In a fab with hundreds of multi-chamber, single-wafer processes, 75,000 or more of these events occur every minute.

EDA (Equipment Data Acquisition) – SEMI E120, E125, E132, E134, E164, etc.

Rounding out the SEMI standards in our example gigafab is the suite of EDA standards which complement the command and control functions of GEM/GEM300 with flexible, high-performance, model-based data collection. The EDA standards enable the on-demand collection of the volume and variety of “big data” required from the equipment to support the advanced analysis, machine learning, and other AI (Artificial Intelligence) applications that are becoming increasingly prevalent in leading semiconductor manufacturers. As EUV (Extreme Ultraviolet) lithography moves from pilot production to high-volume manufacturing at the 7nm process node and beyond, the litho process area will become a major source of process data by itself, generating 10 GB of data every minute. This is in addition to the 100 GB of data collected from other process areas.

The End Result

The final wedge (12:00 o’clock) in our infographic highlights the real objective – which is producing the millions of integrated circuits that fuel our global economy and provide the technologies that are an integral part of our modern way of life. Assuming a nominal die size of 50 square mm (typical of an 8 GB DRAM), the 2.31 wafers we started at 1:00 o’clock result in almost 3200 individual chips. But none of this would be possible without the pervasive factory automation technology we now take for granted. So, as you finish reading this posting on whatever device you happen to be using, take a micro-moment to acknowledge and thank the hundreds of standards volunteers whose insights and efforts made this a reality!

You may not be responsible for running a gigafab anytime soon, but the SEMI standards used in this setting are no less applicable to any Smart Manufacturing environment. Give us a call if you’d like to know more about how these technologies can benefit your operations for many years to come.

 

You can see this infographic and much more in the Cimetrix Resource center.

Resources

 *The Gigafab Minute was inspired by an analogous explication of the scope and impact of today’s Internet from Lori Lewis and Chadd Callahan of Cumulus Media, and published on the Visual Capitalist web site (http://www.visualcapitalist.com/internet-minute-2018/)

Topics: Industry Standards, SECS/GEM, Semiconductor Industry, Smart Manufacturing/Industry 4.0

EDA Applications and Benefits for Smart Manufacturing Episode 5: Fleet Matching and Management

Posted by Alan Weber: Vice President, New Product Innovations on Sep 5, 2018 10:30:00 AM

In the fourth article of this series, Fault Detection and Classification, we highlighted the application that has been the principal driver for the adoption of EDA (Equipment Data Acquisition) standards across the industry thus far, namely Fault Detection and Classification (FDC). In this posting, we’ll discuss another important application that effectively leverages the capabilities of the EDA standard: Fleet Matching and Management. 

Problem Statement

The problem that fleet matching (which also covers chamber and tool matching) addresses is maintaining large sets of similar equipment types at the same operating point in order to maximize lot scheduling flexibility for the real-time scheduling and dispatching systems that run modern wafer fabs. This avoids the situation where specific equipment instances are dedicated to (and therefore reserved for) critical layers of certain products, processes or recipes, which can reduce the effective capacity of the affected process area. This situation can arise because tools naturally “drift” apart over time, especially when manual adjustments are made to the equipment, or other factors (maintenance actions, consumable material changes, key sub-system replacements, etc.) affect the equipment’s operating envelope.

Of course, part of the problem is choosing which equipment should be the one matched to—the so-called “golden tool.” And depending on the breadth of the fab’s product/process mix, there may be multiple targets to choose from, further complicating the task. 

Solution Components

The solutions for many of today’s complex manufacturing problems require lots of high-quality equipment data, and fleet matching is no exception. Like FDC, choosing the golden tool(s) also requires some information about which recent lots exhibited the highest yields, which must be correlated with the equipment used throughout the process. Unlike FDC, however, it is NOT necessary to build hundreds (if not thousands) of multivariate fault models specific to the various context combinations, because the underlying principle of chamber/tool/fleet matching is that “if all the fundamental operating mechanisms of a set of equipment are working consistently, then the behavior of the equipment in aggregate should likewise be consistent.” This means that the matching process can be largely recipe independent, which is a major simplification over other statistically based applications.

This is not as simple as it may first appear, because a complex equipment may have scores of these mechanisms (pressure/flow control, multi-zone temperature control, motion control, power/phase generation, etc.) for which thousands of parameters must be collected to characterize and monitor equipment behavior accurately. Static and dynamic equipment configuration information also comes into play, since similar (but not identical) tools may be interchangeable for certain processes. 
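As a sketch of the recipe-independent matching idea, each fundamental mechanism on each tool can be reduced to a simple statistical “fingerprint” and compared against the golden tool. The mechanism names, metric, and threshold below are illustrative assumptions, not a prescribed algorithm.

```python
import numpy as np

def fingerprint(trace):
    """Summarize one mechanism's trace data (1-D array) as (mean, std)."""
    return np.mean(trace), np.std(trace, ddof=1)

def match_to_golden(tool_traces, golden_traces, tolerance=0.05):
    """Report mechanisms whose fingerprint deviates from the golden tool's
    by more than `tolerance` (relative) -- largely recipe independent."""
    mismatches = {}
    for mechanism, trace in tool_traces.items():
        mean, _ = fingerprint(trace)
        golden_mean, _ = fingerprint(golden_traces[mechanism])
        if golden_mean != 0 and abs(mean - golden_mean) / abs(golden_mean) > tolerance:
            mismatches[mechanism] = {"tool": mean, "golden": golden_mean}
    return mismatches

# Hypothetical usage: compare Etcher02's pressure and RF mechanisms to Etcher01.
# drift = match_to_golden(etcher02_traces, etcher01_traces)
```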

This is where the EDA standards enter the picture.

EDA Standards Leverage

Although not explicitly required by the SEMI EDA standards, the intent and expectation of their designers was to support a far richer (read “more detailed”) equipment metadata model than is practical in most SECS/GEM implementations. With respect to fleet matching and management, this would include not just the high-level status variables for key equipment mechanisms (listed above), but also the setpoints, internal control parameters, and detailed status of their underlying components.

The metadata model must also include the complete set of equipment constants that govern tool operation, since these “constants” are sometimes changed “on the fly” by an operator within some allowable range. While this may be an acceptable production practice, it nevertheless affects the tool’s operating window, and must be accounted for in the matching algorithms.

Moreover, the communications interface should support sampling and data collection of these detailed parameters at a frequency sufficient to observe the complete real-time operation of these mechanisms so the process and equipment engineers can more deeply understand how the equipment actually works. Support for this level of equipment visibility was also a stated requirement for the EDA standards.

Once this data is collected, a variety of analysis tools can look for similarities and anomalies in the equipment parameters to identify the factors that matter most in achieving consistent process performance. At this writing, a number of companies are looking at this domain as an ideal application for Artificial Intelligence and Machine Learning technology. Stay tuned for exciting developments in this area. 

KPIs Affected

The KPI (Key Performance Indicator) most impacted by the fleet matching and management application is overall factory cycle time, since the scheduling systems can make optimal use of all available equipment to move material through the fab.

Equipment uptime is also improved, because the continuous equipment mechanism “fingerprinting” process which is fundamental to fleet matching also catches potential problems before they cause the entire tool to fail. Finally, when more equipment instances are available for running experimental lots (rather than having dedicated tools for this), the yield ramp for new processes can be shortened as well.

If keeping a large set of supposedly identical equipment operating consistently is a challenge you currently face, give us a call. We can help you understand the approaches for building a standards-based Smart Manufacturing data collection infrastructure to support the machine learning algorithms that are increasingly prevalent in this latest generation of manufacturing applications… including fleet matching and management.

To Learn more about the EDA/Interface A Standard for automation requirements, download the EDA/Interface A white paper today.

Download

 

Topics: EDA/Interface A, Smart Manufacturing/Industry 4.0, EDA in Smart Manufacturing Series

European Advanced Process Control and Manufacturing Conference XVIII: Retrospective and Takeaways

Posted by Alan Weber: Vice President, New Product Innovations on Jun 13, 2018 11:30:00 AM

Cimetrix participated in the recent European Advanced Process Control and Manufacturing (apc|m) Conference, along with over 160 control systems professionals across the European and global semiconductor manufacturing industry. The conference was held in Dresden, a beautiful city in the Saxony state of Germany which was the site of the original European conference in 2000 and host to this annual event many times since.


This conference, now in its 18th year and organized by Silicon Saxony, is one of only a few global events dedicated to the domain of semiconductor process control and directly supporting technologies. The participants represented all links in the semiconductor manufacturing value chain, from universities and research institutes to component, subsystem, and equipment suppliers to software product and services providers to semiconductor IDMs and foundries across a wide spectrum of device types to industry trade organizations – something for everyone. 


As usual, the conference was very well organized, and featured a wide range of high-quality presentations, keynote addresses, and tutorial sessions. 

Highlights of the conference included the following:

  • “FDC to the power of 2 – how it got us to the next level of manufacturing excellence“ by Jan Räbiger of GLOBALFOUNDRIES – one of a number of long-time thought leaders in the development and application of APC technology, Jan described the latest phase of FDC system evolution, which includes broad use of the EDA/Interface A standards to zero in on recipe step-specific anomalies that had previously escaped detection.
  • “Applying the Tenets of Industrie 4.0 / Smart Manufacturing to Microelectronics Next Generation Analytics and Applications“ by James Moyne (University of Michigan / Applied Materials) – James presented a very nice decomposition of the domain into 6 topic areas (Big Data Environment, Advanced Analytics and Applications, Supply Chain Integration, CPS/IIoT, Cloud Computing, Digital Twin) and explained our industry’s relative status and recommended actions in each. One of the conclusions from his very disciplined treatment of the topic is that “Smart Manufacturing is essentially a connectivity problem” – and we couldn’t agree more!
  • “Lithography Control is Data Hungry” by Tom Hoogenboom of ASML – his illustration of just how precise litho metrology has become was brilliant: controlling exposure and registration at the 5nm node on a 300mm substrate is like moving your chair in the conference meeting room by 1 mm and having an airborne observer of a 300km diameter region know it happened!

Finally, as in many prior years, Cimetrix was privileged to present at this conference, as Alan Weber delivered a talk entitled “EDA Applications and Benefits for Smarter Manufacturing.” This presentation described the potential use of SEMI EDA (Equipment Data Acquisition) standards to improve the performance and benefit of a range of manufacturing applications; it also included a specific ROI case study for the use of EDA in the all-important FDC (Fault Detection and Classification) application to reduce the false alarm rate and the severity of process excursions. If you want to know more, you can request to view a copy of the entire presentation.

However, it wasn’t all work and no play… The local sponsors, GLOBALFOUNDRIES, Infineon, and XFAB, hosted the conference banquet at the picturesque Adam’s Gasthof in the nearby city of Moritzburg.


In addition to all the food and libation one could possibly consume, the participants were feted with a torchlight walking tour of the town and its iconic Moritzburg Castle. All in all, German hospitality and history at its best.  


The insights gained from these and the other 30+ presentations are too numerous to list here, but in aggregate, they provided an excellent reminder of how relevant semiconductor technology has become for our comfort, sustenance, safety, and overall quality of life.

This (apc|m) conference and its sister conference in the US are excellent venues to understand what manufacturers do with all the data they collect, so if this topic piques your interest, be sure to put these events on your calendar in the future. In the meantime, if you have questions about any of the above, or want to know how equipment connectivity and control fit into the overall Smart Manufacturing landscape, please contact us!

Topics: EDA/Interface A, Events

EDA Applications and Benefits for Smart Manufacturing Episode 4: Precision Fault Detection and Classification (FDC)

Posted by Alan Weber: Vice President, New Product Innovations on May 2, 2018 10:24:00 AM

In the third article of this EDA Applications and Benefits in Smart Manufacturing series, we highlighted the first of a series of manufacturing applications that leverage the capabilities of the EDA / Interface A suite of standards in leading semiconductor manufacturers. In this fourth article, we’ll highlight the application that has been the principal driver for the adoption of EDA across the industry thus far, namely Fault Detection and Classification (FDC).

Problem Statement

The problem that FDC addresses is the prevention of scrap that may result from processing material on a piece of equipment that has drifted out of its acceptable operating window for whatever reason. The prevalent technique used by today’s leading FDC systems is to develop “reduced dimension” statistical fault models for the various production operating points based on training sets of “good” and “bad” runs. These models are then evaluated in real time with key parameters (usually trace data) collected from the equipment during processing to detect process deviation and predict impending tool failure. In the most advanced fabs, the FDC software is deeply integrated with the systems that manage process flow, and can even interdict equipment operation in mid-run to prevent/reduce scrap production.

Of course, the challenge with this type of algorithm is developing models that are “tight” enough to catch all sources of potential faults (i.e., eliminating false negatives) while leaving enough wiggle room to minimize the number of false positives (also known as false alarms, or crying “Wolf!”). This in turn requires high quality data from the equipment, and lots of process engineering and statistical analysis expertise to develop and update the fault models for the range of production cases that must be handled. High-mix foundry environments exacerbate this situation.

Solution Components

The core of modern FDC systems is a robust multivariate statistics analysis toolbox, capable of handling large amounts of time series data. By “large,” we mean both the number of distinct equipment parameters and the number of samples for each parameter. These software tools collapse potentially hundreds of parameters into a small set of “principal components” that can be calculated on-the-fly using a limited set (say, 20-30) of equipment parameters. Some number of these principal components in aggregate represent the actual state of the process accurately enough to detect deviations from the norm, and since they can be realistically calculated in real time, the application serves as an on-line equipment health monitor.
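A highly simplified sketch of that approach is shown below, assuming scikit-learn and a training set of trace samples from known-good runs. It illustrates the principle (principal components plus a health statistic and control limit); commercial FDC packages are far more sophisticated.

```python
import numpy as np
from sklearn.decomposition import PCA

# X_good: rows = time samples from known-good runs, columns = equipment parameters.
# A random stand-in is used here; a real system would use collected trace data.
X_good = np.random.normal(size=(5000, 30))

pca = PCA(n_components=5).fit(X_good)        # collapse 30 parameters into 5 PCs

def hotelling_t2(samples):
    """Health statistic: distance of new samples from the 'good' operating region."""
    scores = pca.transform(samples)
    return np.sum(scores**2 / pca.explained_variance_, axis=1)

control_limit = np.percentile(hotelling_t2(X_good), 99.9)

# During processing, evaluate the statistic on the fly for each new trace report:
# if hotelling_t2(new_samples).max() > control_limit: raise a fault alarm
```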

The other major solution component for a production FDC system is a fault model library management capability that can handle large numbers of models. This is necessary because the multivariate approach includes little or no awareness of the physical meaning of the principal components (i.e., they are not “first principles” based), so different operating points for the equipment must have their own sets of fault models. The proper models for a given operating point are selected by matching the values of the “context parameters” for a specific run to those used to store the models. Even if some models can be shared across a range of operating points, the number of distinct models for a foundry megafab will still number in the thousands.
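Managing that many models usually comes down to indexing them by the context in which they were built; a minimal sketch of such a lookup follows (all context keys and model names are hypothetical):

```python
# Fault models stored against the context combination they were trained for.
model_library = {
    # (equipment, chamber, recipe, product, layer) -> fault model identifier
    ("ETCH01", "ChamberA", "OxideEtch_Rev7", "ProdX", "Metal1"): "model_0001",
    ("ETCH01", "ChamberB", "OxideEtch_Rev7", "ProdX", "Metal1"): "model_0002",
}

def select_model(context):
    """Return the fault model matching a run's context, or None if that
    combination has not yet been characterized."""
    key = (context["equipment"], context["chamber"], context["recipe"],
           context["product"], context["layer"])
    return model_library.get(key)
```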

EDA (Equipment Data Acquisition) Standards Leverage

In an advanced fab, there is a spectrum of data collection alternatives available for a given application, from basic lot-level summary information to detailed real-time data that can be used at the substrate level or even on a die/site basis. For FDC, this spectrum of possibilities is shown in the table below.

SEMI Standard Level | Functionality | Benefit
--- | --- | ---
GEM/GEM300 | Fault models difficult to change after initial development if data collection requirements change | Baseline
EDA Freeze I (1105) | Easy to change equipment data collection plans as fault models evolve and require new data; model development environment can be separate from production system | Engineering labor reduction; improved fault models and lower false alarm rate
EDA Freeze II (0710) | Use conditional triggers to precisely “frame” trace data while reducing overall data collection needs; incorporate sub-fab component/subsystem data into fault models | Even better fault models; reduced MTTD (mean time to detect) of fault or process excursion; little or no data post-processing required
EDA Common Metadata (E164) | Include standard recipe step-level transition events for highly targeted trace data collection; automate initial equipment characterization process by using metadata model to generate required data collection plans | Faster tool characterization and fault model development time
Factory-Specific EDA Requirements | Incorporate previously unavailable equipment signals in fault models; update data collection plans and fault models automatically after process and recipe changes; include recipe setpoints in the equipment metadata models | TBD (Not yet applicable)

The left column refers to the level of SEMI standards used to provide the necessary equipment data. The “Functionality” column describes how that data is used in an FDC context, and the “Benefit” column highlights the potential impact these functions can have. 

Let’s say that a fab implemented the capability described on rows 3 and 4 of the table (EDA Freeze II (0710) and E164-compliant EDA Common Metadata). In this case, the process equipment will be able to provide detailed process parameters at recipe step-specific sampling rates sufficient to evaluate “feature extraction” algorithms of even the most demanding FDC models…with context data to select the precise set of models for a given process condition. And even though the specific equipment parameters are necessarily process dependent, much of the software that monitors recipe execution events, generates the data collection plans (DCPs) that provide the trace data, and assembles the context data used by the model management library can be truly generic because of the fab-wide consistency of the equipment interfaces dictated by the E164 standards.


Another aspect of the EDA standards that FDC teams can leverage is the system architecture flexibility enabled by the multi-client capability. Even while a piece of equipment is connected to a production data management infrastructure, the process engineers and statisticians who develop and refine the fault models can use an independent data collection system tailored for process behavior analysis, experimentation, and continuous improvement. When the new fault models are ready for production, the production DCPs can be updated with these new requirements.

KPIs Affected


FDC is considered a “mission-critical” application in today’s fabs because of the high cost of unscheduled equipment downtime and the importance of maintaining high product yield. Simply stated, “if FDC is down, the tool is down,” which means that the real-time data collection infrastructure supporting this application is likewise mission critical. As such, improvements in FDC performance can have a major impact on fab performance.

Specifically, FDC directly affects the process yield and scrap rate KPIs through higher fault detection sensitivity, and it affects equipment availability and related KPIs by reducing the number of false alarms that often require equipment to be taken out of production.

So what?

A wise colleague advised me early in my career to always have an answer for this question at the end of every presentation, article, or conversation. To answer this question in financial terms for this posting, let’s consider the cost of FDC false alarms for a production 300mm fab.

Assuming that 

  • an hour of tool time is worth US$2200, a qual wafer costs $250, and an hour of engineering/technician time costs $150, and 
  • it takes 5 hours of tool time, 2 hours of engineering time, and 6 qual wafers to resolve a false alarm, then 

each false alarm costs the company nearly $13K. For a fab with 2000 pieces of equipment and an average false alarm rate of 2 per tool per year, that comes to an annual cost of over $50M! A 50% reduction in the false alarm rate (which is not unreasonable) nets more than $25M of savings per year.
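Spelled out as code, using exactly the assumptions listed above:

```python
tool_time_cost   = 5 * 2200   # 5 hours of tool time at $2200/hour
engineering_cost = 2 * 150    # 2 hours of engineering/technician time at $150/hour
qual_wafer_cost  = 6 * 250    # 6 qual wafers at $250 each

cost_per_false_alarm = tool_time_cost + engineering_cost + qual_wafer_cost
# = 11,000 + 300 + 1,500 = $12,800 per false alarm

alarms_per_year = 2000 * 2                              # 2000 tools, 2 alarms/tool/year
annual_cost = alarms_per_year * cost_per_false_alarm    # ~$51M per year
savings_at_50_percent = annual_cost // 2                # ~$25.6M per year
```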


If this sounds like “real money” to you, give us a call. We can help you understand how to get on the Smart Manufacturing path with the kind of standards-based data collection infrastructure that is needed to support the latest generation of FDC systems and beyond.

 

To Learn more about the EDA/Interface A Standard for automation requirements, download the EDA/Interface A white paper today.

Download

 

Topics: EDA/Interface A, Smart Manufacturing/Industry 4.0, EDA in Smart Manufacturing Series

EDA Applications and Benefits for Smart Manufacturing Episode 3: Real-Time Throughput Monitoring

Posted by Alan Weber: Vice President, New Product Innovations on Mar 28, 2018 11:13:00 AM

In the introduction to this series (posted December 19, 2017), we listed some of the manufacturing stakeholders whose work objectives are directly addressed by the applications we’ll highlight in this and subsequent postings. In the second article, we explained the process used to map the careabouts of key stakeholder groups into specific EDA interface requirements which can then be directly included in the purchasing specifications.

In this post, we’ll explain how some of those interface requirements support an important factory application that has general applicability across all equipment types, namely “real-time throughput monitoring.” This application can realistically work with a variety of equipment types with no custom code or configuration, depending, of course, on how faithfully the equipment supplier implements the SEMI standards referenced in the requirements specification. This powerful concept greatly improves the software engineering productivity of a fab’s automation team, so we’ll take some time to explain how this is possible.

Problem Statement

This application addresses the problem of monitoring equipment throughput performance in real time, and raising an alarm when it drifts away from “normal” for any reason. This is especially important for bottleneck equipment (e.g., litho tracks and scanners), because any loss of throughput ripples throughout the line, resulting in lost production and its associated revenue and profit. Stated simply, “lost time on a bottleneck tool can never be recovered.” 

Solution Components

This application requires data that includes primarily the equipment events that chronicle the movement of substrates through the equipment and execution of the recipes appropriate for this equipment type (process, metrology, inspection, sorting, etc.). With this information, the application calculates the process time “on the fly” and compares the current value with the expected (“normal”) value. 


This is not as simple as it first may seem, because the expected value will likely depend on the product type, process type, material status, layer, recipe, and several other factors. Taken together, the set of factors that determines “equivalence” of different lots for some processing purpose is called “context.” For this application, the context parameters ensure that you are comparing apples and apples when looking for variations in process time.

EDA (Equipment Data Acquisition) Standards Leverage

By “EDA,” we include not only the standards in the Freeze II / 0710 suite, but also SEMI E164 (EDA Common Metadata), E157 (Module Process Tracking), and by reference, the entire GEM 300 suite. This ensures not only the granularity and breadth of event support necessary to precisely track wafer movement and step-level recipe execution, but also specifies the naming conventions of those events and their associated parameters, regardless of equipment type or vendor.

If the equipment automation purchase specifications include clauses that state “we require that all state machines, states, state transition events, and attributes of the objects defined in the referenced 300mm SEMI standards be implemented and named exactly as specified in the standards,” then all the information you should need to write a truly generic throughput monitoring application will be available on demand.

A robust real-time throughput monitoring algorithm can be implemented with information solely from the following SEMI standards: E90 (Substrate Tracking), E157 (Module Process Tracking), E40 / E94 (Processing / Control Job Management), and E87 (Carrier Management). The Harel state diagrams, events of interest, and EDA metadata model representation* for a couple of these (E90 and E157) are shown in the figures below.

[Figure: E90 Substrate Tracking – Harel state diagram, events of interest, and EDA metadata model representation]

[Figure: E157 Module Process Tracking – Harel state diagram, events of interest, and EDA metadata model representation]

Note that as little or as much of the parameter information required to be available for each event (the rightmost picture in each figure) can be collected via the EDA construct of a “Data Collection Plan” (DCP) with one or more “Event Requests.” For more information about these capabilities, consult the SEMI E134 (Data Collection Management) specifications directly, or review some of the extensive educational material available on our web site.
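To illustrate how generic this can be, the sketch below computes per-substrate process times purely from standard substrate-tracking events and compares them against a context-specific baseline. The event names are simplified stand-ins for the E90/E157 transition events rather than the exact standard names, and the baseline values are hypothetical.

```python
class ThroughputMonitor:
    """Generic real-time throughput monitor fed by standard EDA event reports."""

    def __init__(self, expected_times, tolerance=0.10):
        self.expected_times = expected_times   # {context tuple: expected seconds}
        self.tolerance = tolerance
        self.start_times = {}
        self.alarms = []

    def on_event(self, event, timestamp, substrate_id, context):
        # Simplified stand-ins for E90/E157 substrate/step transition events.
        if event == "SubstrateProcessingStarted":
            self.start_times[substrate_id] = timestamp
        elif event == "SubstrateProcessingCompleted":
            started = self.start_times.pop(substrate_id, None)
            if started is None:
                return
            actual = timestamp - started
            expected = self.expected_times.get(context)
            if expected and abs(actual - expected) / expected > self.tolerance:
                self.alarms.append((substrate_id, context, actual, expected))

# Hypothetical usage: one baseline per (product, layer, recipe) context.
monitor = ThroughputMonitor({("ProdX", "Metal1", "RecipeA"): 95.0})
monitor.on_event("SubstrateProcessingStarted", 0.0, "W01", ("ProdX", "Metal1", "RecipeA"))
monitor.on_event("SubstrateProcessingCompleted", 120.0, "W01", ("ProdX", "Metal1", "RecipeA"))
# monitor.alarms now contains one entry: W01 ran ~26% slower than expected.
```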

The other point of leverage for the EDA standards is the multi-client capability. This contributes to the productivity and responsiveness of your automation software team members by allowing them to collect and process the data for this application independently from any other application. Specifically, the throughput monitoring functions can be implemented separately from whatever systems host the GEM command and control capabilities, which are usually managed very carefully because of their potentially negative impact on fab operations.

Key ROI Factors

As we said in the initial post of this series, this application is not just something you could build and deploy with EDA-enabled equipment… in fact, this has already been done, and is delivering real production manufacturing benefit! Specifically, the ROI factors impacted (and benefit delivered) by this application include productivity excursion mean-time-to-detect (MTTD, 50% reduction), selected equipment throughput improvement (3-5%), and overall cycle time reduction (difficult to quantify precisely because of the staged implementation process).

Of course, these results will vary depending on the manufacturer’s fab loading, operations strategy, and overall automation capabilities, but are representative for leading edge production wafer fabs running at near capacity. However, since these are very common ROI factors, most companies can easily quantify these improvements in real financial terms.

In Closing...

As always, your feedback is welcome, and we look forward to sharing the Smart Manufacturing journey with you.


*The visualizations of equipment metadata model fragments are those produced by the Cimetrix ECCE Plus product (Equipment Client Connection Emulator).

Let us know if you would like to schedule a meeting to learn more:

Schedule a Meeting

Topics: EDA/Interface A, Smart Manufacturing/Industry 4.0, EDA in Smart Manufacturing Series