Industry News, Trends and Technology, and Standards Updates

Derek Lindsey: Product Manager

Derek Lindsey has been an employee of Cimetrix for over 22 years. For 21 of those years, Derek was a member of the product development team and was involved in the development of several Cimetrix products. He also has extensive experience working with Cimetrix customers in implementing their tool control solutions using our products. He recently joined the product management team to help add technical expertise to product development. Derek has a bachelor’s degree in computer science from Brigham Young University.

Recent Posts

EDA Best Practices Series: Choose to Provide E164-Compliant Models

Posted by Derek Lindsey: Product Manager on Aug 28, 2019 11:42:00 AM

In the EDA Best Practices blog series, we have discussed choosing a commercial software platform, using that package to differentiate your data collection capabilities, and choosing what types of data to publish. In this post we review why you should choose to provide an E164-compliant equipment model.

What is E164?

Equipment Data Acquisition (EDA) - also referred to as Interface A - offers semiconductor manufacturers the ability to collect a significant amount of data that is crucial to the manufacturing process. This data is represented on the equipment as a model, which is communicated to EDA clients as metadata sets. The metadata, based upon the SEMI E125 Specification for Equipment Self-Description, includes the equipment components, events, and exceptions, along with all the available data parameters.

Since the advent of the SEMI EDA standards, developers and fabs have recognized that equipment models, and the resulting metadata sets, can vary greatly. It is possible to create vastly different models for similar pieces of equipment and have both models be compliant with the EDA standards. This makes it difficult for the factories to know where to find the data they are interested in from one type of equipment to another.

Recognizing this issue, the early adopters of the EDA standards launched an initiative to make the transition to EDA easier and to ensure consistency of equipment models and metadata from equipment to equipment. This effort resulted in the E164 EDA Common Metadata standard, approved in July 2012. Another part of this initiative was the development of the Metadata Conformance Analyzer (MCA), a utility that tests conformance to this standard. With this specification, equipment modeling is more clearly defined, producing more consistent models across equipment suppliers. This makes it easier for EDA/Interface A users to navigate models and find the data they need.

Power of E164

The E164 standard requires strict name enforcement for events called out in the GEM300 SEMI standards. It also requires that all state machines contain all of the transitions called out in the GEM300 standards, in the specified order. This includes the state machines in E90 for substrate locations and in E157 for process management. The state and transition names in these state machines must match the names specified in the GEM300 standards.

These requirements may seem unnecessarily strict, but implementing the common metadata standard results in:

  • Consistent implementations of GEM300
  • Commonality across equipment types
  • Automation of many data collection processes
  • Less work to interpret collected data
  • Ability for true “plug and play” applications
  • Major increases in application software engineering efficiency

Knowing that a model is E164 compliant allows EDA client applications to define data collection plans easily and programmatically, since compliant models must provide all of the specified data with the specified names. For example, the following application is able to track carrier arrival and slotmap information, as well as movement of material through a piece of equipment and process data for that equipment.

This application will work for any GEM300 equipment that is E164 compliant. The client application developer can confidently create data collection plans for these state machines, knowing that an E164-compliant model must provide the needed state machines and data with the prescribed names.
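To make this concrete, here is a minimal sketch of how a client application might assemble such a plan against an E164-compliant model. It is hypothetical: the DataCollectionPlan class and the event and parameter names are placeholders chosen to resemble E87/E90/E157-style transition events, not quotes from the standards or any vendor API.

```python
# A minimal sketch of building an event-based data collection plan (DCP)
# against an E164-compliant model. The class and the event/parameter names
# below are hypothetical placeholders, not the actual SEMI-defined strings
# or any vendor API.

from dataclasses import dataclass, field
from typing import List


@dataclass
class EventRequest:
    event_name: str            # metadata event the report is attached to
    parameters: List[str]      # parameters to include in the report


@dataclass
class DataCollectionPlan:
    name: str
    event_requests: List[EventRequest] = field(default_factory=list)

    def add_event_report(self, event_name: str, parameters: List[str]) -> None:
        self.event_requests.append(EventRequest(event_name, parameters))


# Because E164 fixes the GEM300 state machines and their transition names,
# the same plan definition can be reused on any compliant equipment.
material_tracking = DataCollectionPlan("MaterialTracking")
material_tracking.add_event_report(
    "CarrierArrived",                       # illustrative E87-style event name
    ["CarrierID", "SlotMap"],
)
material_tracking.add_event_report(
    "SubstrateLocationOccupied",            # illustrative E90-style transition event
    ["SubstrateID", "SubstrateLocationID"],
)
material_tracking.add_event_report(
    "ProcessStateExecuting",                # illustrative E157-style transition event
    ["RecipeID", "StepID"],
)

if __name__ == "__main__":
    for req in material_tracking.event_requests:
        print(req.event_name, "->", req.parameters)
```

The point of the sketch is that the plan is defined once, against names the standard guarantees, rather than being rebuilt for each supplier's model.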

Decide to be E164 compliant

A number of leading semiconductor manufacturers around the globe have seen the power of requiring their equipment suppliers to provide EDA/E164 on their equipment, and now require it in their purchase specifications.

If you are a semiconductor manufacturer, you should seriously consider doing the same, because it will greatly simplify data collection from the equipment (and most of your candidate suppliers probably have an implementation available or underway).

If you are an equipment supplier and your factory customers have not required that your EDA models be E164 compliant, you should still seriously consider providing this capability as a way to differentiate your equipment. Moreover, E164-compliant models are fully compliant with all other EDA standards. Finally, it is much easier and more cost effective to create E164-compliant models from the outset than it is to create non-compliant models and then convert to E164 when the factory requires it.

Conclusion

The purpose of the E164 specification is to encourage companies developing EDA/Interface A connections to implement a more common representation of equipment metadata. By following the E164 standard, equipment suppliers and factories can establish greater consistency from equipment to equipment and from factory to factory. That consistency will make it easier and faster for equipment suppliers to provide a consistent EDA interface, and for factories to develop EDA client applications.


Topics: Industry Standards, EDA/Interface A, Doing Business with Cimetrix, Smart Manufacturing/Industry 4.0, Cimetrix Products, EDA Best Practices

EDA Implementation Insights: What Data Should I Publish?

Posted by Derek Lindsey: Product Manager on Mar 19, 2019 11:30:00 AM

Previous blog posts have discussed the merits of choosing a commercial software platform for implementing the equipment side of EDA (Equipment Data Acquisition) and how you would use that package to differentiate your equipment data collection capabilities from your competitors.

In this post, we discuss how to design the equipment model to contain enough information to make it useful without publishing so much data that it becomes cumbersome for your factory customers to find the data that is most important to them.

Data to Publish

The automation requirements for the most advanced fabs call for the latest versions (Freeze II) of all the standards in the EDA suite, including the EDA Common Metadata (SEMI E164) standard. In addition to providing an excellent foundation for a new equipment model, E164 enables consistent implementation of GEM300, commonality across equipment types, automation of many data collection processes, less work to interpret collected data, and true plug-and-play client applications—all of which contribute to major increases in engineering efficiency. These capabilities benefit both the equipment suppliers and their factory customers alike. Therefore, equipment models should make all E164-compliant data available.

To summarize, those who remember the complexity of implementing SECS-II before GEM came along (pre-1992) will understand this analogy: E164 is to EDA what GEM was to SECS-II.

Fab-specified Data

The second blog post in this series made the following statements:

“In effect, the metadata model IS the data collection 'contract' between the equipment supplier and the fab customer."

“This is why the most advanced fabs have been far more explicit in their automation purchase specifications with respect to equipment model content, going so far as to specify the level of detailed information they want to collect about process performance, equipment behavior, internal control parameters, setpoints and real-time response of common mechanisms.”

You only have to read the latest requirements specs for these fabs to get more specifics. Pick the one from your customer base that sets the bar highest and let that be your target.

Data to Avoid in the Model

It is easy to fall into the mindset that if publishing some data through the EDA interface is desirable, the more data we can publish, the better. This is not always the case. In his fascinating book, The Paradox of Choice, Barry Schwartz makes the case that freedom is defined by one’s ability to choose, but more choice doesn’t mean more freedom. In fact, too many choices actually cripple one’s ability to choose. The same can be said of data published in an EDA interface. Making too much data available actually hinders the creation of EDA client applications.

We were recently working with a fab to perform a proof-of-concept where we connected an EDA client to a piece of equipment with an EDA interface. We were able to connect to the equipment in a matter of minutes, but finding suitable data to collect for our proof-of-concept took almost an hour because there was so much superfluous data published from the equipment.

Publishing everything including the kitchen sink reduces the ability to create an efficient EDA client application.

Some examples of data to avoid publishing in the model include:

  • Parameters that have no value – If a parameter is available in the model, but the value is not published by the equipment control application, that parameter is just extra noise in the interface. Consider not adding it to the model.
  • Parameters with values that do not change – If a parameter value does not change during the life of the application, it does not make sense to collect that parameter’s data. For example, if an application uses an equipment constant, it may not be necessary to publish that constant through the EDA model.
  • Irrelevant data – If a parameter contains data that is irrelevant to data publication, it should not be added to the model. For example, parameters that contain the IP address or port number for the connection are not very useful in the equipment model. This information is necessary for connecting with an EDA client, but is not relevant for data collection.

The takeaway: Publish data required by E164 and additional fab-specified data, but carefully evaluate other data to be published to make sure it is relevant and useful for data collection.
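As a rough sketch of that evaluation step, the example below screens candidate parameters before they are added to a model. The helper function and the rule set are assumptions made for illustration; they are not part of any SEMI standard or Cimetrix product.

```python
# A minimal sketch of screening candidate parameters before adding them to
# an equipment model. All names here are hypothetical.

CONNECTION_ONLY_NAMES = {"HostIPAddress", "HostPort"}   # connectivity settings, not collectable data


def should_publish(name, has_value, changes_at_runtime):
    """Return True if the parameter is worth exposing in the EDA model."""
    if name in CONNECTION_ONLY_NAMES:
        return False          # irrelevant to data collection
    if not has_value:
        return False          # never populated by the control application: just noise
    if not changes_at_runtime:
        return False          # constant values add little to data collection
    return True


candidates = [
    # (name, has_value, changes_at_runtime)
    ("ChuckTemperature", True, True),
    ("HostIPAddress", True, False),
    ("ReservedForFutureUse", False, False),
    ("SoftwareBuildNumber", True, False),
]

published = [name for name, has_value, changes in candidates
             if should_publish(name, has_value, changes)]
print(published)   # -> ['ChuckTemperature']
```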

If you have questions about Equipment Data Acquisition or would like a demo of the functionality described above, please contact Cimetrix to schedule a discussion.

You can download an introduction to EDA White Paper any time.

Read the White Paper

Topics: Industry Standards, EDA/Interface A, Smart Manufacturing/Industry 4.0

SECS/GEM series: Equipment Terminal Services

Posted by Derek Lindsey: Product Manager on Apr 19, 2018 10:27:00 AM

After several articles in the series discussing data collection, events, alarms, recipe management and documentation, this post focuses on the Twitter of the GEM standard – Equipment Terminal Services. We will examine what terminal services are, why they are needed and the mechanics of how they work.

What are Terminal Services?

Equipment Terminal Services allows factory operators to exchange information with the host from their equipment workstations. The host can display information on the equipment’s display device, and the operator of the equipment can send information to the host. The equipment must be capable of displaying information passed to it by the host for the operator’s attention.

Why Do You Need This Feature?

An example of when terminal services might be used is as follows:

  1. The host gets notified by the FDC software that the process module had an excursion that needs to be addressed.
  2. The host turns on an operator notification light on the light tower. The notification light needs to be accompanied by a reason that the light was illuminated.
  3. The host sends a terminal message saying that the FDC software detected an excursion and that the operator should address the issue.
  4. Along with the signal tower light, the terminal services notification is active on the tool.
  5. The operator sees and acknowledges the message.
  6. Optional: There are different ways to recover, but the operator could send a terminal message to the host after the issue is resolved.


How Does Terminal Services Functionality Work?

When the host sends a terminal message to the equipment, the equipment is required to display the message to the operator. The display must be able to show up to 160 characters (even more than can be sent in a single tweet using Twitter) but may display more than that. The equipment’s display device must have a mechanism for notifying the operator that a message was received and not yet recognized by the operator. The message continues to be displayed until the operator recognizes the message. The equipment must provide a method, such as a push button, for the operator to acknowledge the message. Message recognition by the operator results in a collection event that informs the host that the operator has received the information. The equipment application is not required to interpret the data sent from the host. It is solely information meant for the operator.

If the host sends a new message before the operator acknowledges a previous message, the new message overwrites the previous message.

The host may clear unrecognized messages (including the indicator) by sending a zero-length message. The zero-length message is not considered an unrecognized message.

The equipment must also allow the operator to send information entered from the operator’s equipment console to the host. 

Which messages are used?

  • S10F3 (host to equipment) – Host sends textual information to the equipment for display to the operator on a terminal.
  • S10F1 (equipment to host) – Operator sends a text message to the host.
  • S10F5 (host to equipment, optional) – Host sends a multi-block display message. If multi-block is not supported, the equipment responds with an S10F7 message indicating that multi-block is not allowed.
  • S6F11 (equipment to host) – Equipment sends a collection event notifying the host that the operator has recognized the message.
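The sketch below models the equipment-side obligations just described: latch an unrecognized message, overwrite it when a new one arrives, clear it on a zero-length message, and raise a collection event when the operator recognizes it. The class and event names are hypothetical, and the code models the behavior only, not the SECS-II message encoding.

```python
# A minimal, hypothetical sketch of the equipment-side terminal services
# behavior described above. It models the rules, not the SECS-II encoding.

class TerminalDisplay:
    def __init__(self, send_collection_event):
        # send_collection_event is whatever mechanism reports the
        # "message recognized" collection event (S6F11) to the host
        self._send_collection_event = send_collection_event
        self._message = ""
        self.unrecognized = False

    def on_host_message(self, text):
        """Host sent S10F3/S10F5 text for the operator."""
        if text == "":
            # A zero-length message clears the display and the indicator,
            # and is not itself considered an unrecognized message.
            self._message = ""
            self.unrecognized = False
            return
        # A new message overwrites any message the operator has not yet recognized.
        self._message = text
        self.unrecognized = True

    def on_operator_recognize(self):
        """Operator pressed the acknowledge button on the equipment console."""
        if self.unrecognized:
            self.unrecognized = False
            self._send_collection_event("TerminalMessageRecognized")


display = TerminalDisplay(send_collection_event=lambda name: print("event:", name))
display.on_host_message("FDC detected an excursion in PM2 - please investigate")
display.on_operator_recognize()        # prints: event: TerminalMessageRecognized
```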

 

Click here to read the other articles in our SECS/GEM Features and Benefits series. 

To download a white paper on an introduction to SECS/GEM, Click below:

SECS/GEM White Paper

Topics: SECS/GEM, SECS/GEM Features & Benefits Series

Equipment Control Logging Benefits

Posted by Derek Lindsey: Product Manager on Mar 8, 2018 11:02:00 AM


Equipment control applications are highly complex and have many moving parts that require a high level of coordination. Because of the high degree of difficulty, problems are bound to crop up. Sometimes the problems are related to a hardware issue. Sometimes the problems are caused by operator error. Sometimes problems are timing related. Sometimes problems happen infrequently. Regardless of the frequency or the cause of the errors, how do you go about debugging issues that happen in the field if you are unable to attach a debugger to the application?
 
The answer is logging.

As part of the CIMControlFramework (CCF) product for creating equipment control applications, Cimetrix developed a logging package. Our logging package has two parts – collecting the log messages and analyzing them.

The logging package allows you to assign a source and a type for each log message. The source specifies where the log message originated. The type is a category that can be used to route the log messages to specific output locations called log sinks. We have found the most useful log sink to be a text-based log file. The logging package can be configured for the types of messages to log. It can also be configured for how long to keep log files and how many to keep. This helps keep hard drives from getting too full.
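To picture the source/type/sink routing, here is a small general-purpose sketch of the pattern. It is not the CCF logging API; the class names and message types are invented for illustration.

```python
# A minimal sketch of routing log messages to sinks by message type.
# This illustrates the pattern only; it is not the CCF logging package API.

from datetime import datetime


class TextFileSink:
    def __init__(self, path):
        self.path = path

    def write(self, line):
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(line + "\n")


class Logger:
    def __init__(self):
        self._routes = {}      # message type -> list of sinks

    def route(self, msg_type, sink):
        self._routes.setdefault(msg_type, []).append(sink)

    def log(self, source, msg_type, text):
        line = f"{datetime.now().isoformat()} [{msg_type}] {source}: {text}"
        for sink in self._routes.get(msg_type, []):
            sink.write(line)


logger = Logger()
logger.route("EPT", TextFileSink("ept.log"))          # equipment performance tracking messages
logger.route("Scheduler", TextFileSink("tool.log"))   # scheduler messages go to the main log

logger.log("PM1", "EPT", "state changed to BUSY")
logger.log("Scheduler", "Scheduler", "wafer 12 routed to PM1")
```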


The temptation for many users is to enable all log messages while developing the equipment control application and then turn all the logging off when the equipment ships to the factory. Cimetrix recommends leaving as much logging enabled as possible. This will help you avoid trips to the fab when a problem arises that can be solved via the logging package. Some clients worry about resource usage by the logging package. We have found that the impact of the logging package is light enough that it is advantageous to leave it on all the time.

The Cimetrix logging package was such a success in CCF that we have started using it in all Cimetrix products. The logging package has earned rave reviews from Cimetrix product users. Here are a few quick examples that show how valuable logging is:

1. An OEM customer called in a panic because an end user was withholding payment due to a timing/throughput issue in the application. Together, Cimetrix and the OEM reviewed the log file. Using some of the LogViewer analysis tools, we were able to isolate and identify the problem within 30 minutes. The OEM was able to confidently tell the end user that they had found the problem and that a fix would be available in the next software release. Because the OEM was able to support them so quickly and remotely, the end user had confidence in the OEM and released the payment.

2. At Cimetrix, we often hear, “This only happened once, but…” With logging always enabled, it is possible to diagnose problems after the fact. This is especially important for problems which occur infrequently. Users of the Cimetrix logging package are able to resolve issues that happen only rarely.

3. Occasionally an equipment control application will deadlock – two different modules are waiting on each other and neither is free to proceed. Using the LogViewer’s Callstacks plug-in, in conjunction with the Timing Chart plug-in, makes the process of diagnosing the deadlock much easier.


4. An end user called up their OEM equipment provider because the software stopped unexpectedly. They wanted the OEM to put someone on a plane immediately to come diagnose the problem. The OEM was able to view the log file and see that an operator had stopped the tool without the supervisor realizing it. When asked, the operator confirmed he had stopped the tool. Crisis averted. No plane ride required by the OEM to satisfy their customer!

5. A client came to Cimetrix for a training class. This client brought in a contractor to attend the class as well. Part of the Cimetrix training was used to review the logging package. During a break in the training, the contractor approached the instructor and asked if he could purchase the logging package separately for use in his other contracts because he could see several applications that would benefit from the power of the logging package.

6. Cimetrix is continuing to add useful plug-ins to the LogViewer. We recently added an E84 (automated material handling system) plug-in to assist in implementing and debugging material transfer. LogViewer allows users to implement their own custom plug-ins for analyzing data important to them.


These are just some of the success stories we have heard about in relation to the logging package. With equipment control applications and factory automation, there will always be issues to be addressed and opportunities to root cause unexpected behavior. Having a powerful logging package makes that process much easier.

 

Topics: Equipment Control-Software Products, Customer Support, Cimetrix Products

SECS/GEM series: Data Polling

Posted by Derek Lindsey: Product Manager on Jan 24, 2018 11:30:00 AM

GEM is an industry standard that defines standard methods to communicate between process equipment and factory host software for monitoring and control purposes. By connecting GEM equipment, factories can immediately experience operational benefits. Factory hosts can collect data in multiple ways. A previous blog post discussed collecting data by using collection event reports, where data is pushed to the host based on state transitions performed by the equipment. In addition to event reports, the factory host often has a need to poll the equipment for current data values. Data values can be directly requested by the host, or can be sampled on a periodic basis in a trace report. This is called data polling and is the topic for today's discussion.

Types of Data

There are three types of data in a GEM interface:

  • Data Variable (DV) – data items that can be gathered when an equipment event occurs. This data is only guaranteed to be valid in the context of the event. For example, the GEM interface may provide an event called PPChanged (triggered when a recipe changes). The interface may also provide a data variable identifying the changed recipe. The value of this DV is only valid in the context of the PPChanged event. Polling the value at a different time may return invalid or unexpected data.
  • Status Variable (SV) – data items that contain information about the equipment. This data is guaranteed to be valid at any time. For example, the equipment may have a temperature sensor in a process module. The GEM interface may provide a ModuleTemperature status variable. The host can request the value of this SV at any time and expect the value to be accurate.
  • Equipment Constant (EC) – data items that contain equipment settings. Equipment Constants determine how equipment will behave. For example, a GEM interface may have an equipment constant called MaxSimultaneousTraces which specifies the maximum number of traces that can be requested simultaneously from the host. The value of equipment constants is always guaranteed to be valid and up to date.

Data Properties

Each of the three data types listed above have similar properties that help define the data. The equipment supplier is responsible for providing these properties in a GEM manual so that the factory host will be able to interact with the data. Some of the important data properties are:

  • ID – a numeric ID that must be unique in the GEM interface. These IDs can be grouped by data type and are referred to as SVIDs (Status Variable IDs), DVIDs (Data Variable IDs) and ECIDs (Equipment Constant IDs).
  • Name – a human-readable name for the data item
  • Format – the data type of the item. 
    • Data formats can be simple (numeric, ASCII, Boolean) or complex (arrays, lists, structures). For example, numeric types can be I1, I2, I4, I8 (signed integer types of different byte length), U1, U2, U4, U8 (unsigned integer types) and F4 or F8 (floating point types). 
    • List and array types contain multiple values in the data item. For example, image data would be formatted as a byte array. 
    • Structure types contain a specific type of data. For example, a variable may represent a slot map which contains carrier information as well as a list of slots and their wafer placement status.
  • Value – the actual value of the data item. Data values are in an accurate, efficient, self-describing binary format so that the host knows how to interpret the data. This format allows more data to be collected more efficiently.

Collection Events (CE) and Alarms also have IDs and names. These items are discussed in other blog posts, but it is important to know about some of the properties for this post because these properties can be queried as well.
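A compact way to visualize the properties above is as one record per variable. The sketch below does exactly that, with IDs, names and formats invented for illustration rather than taken from any real GEM interface.

```python
# A minimal sketch of the properties a GEM manual documents for each variable.
# IDs, names, and formats below are invented for illustration.

from dataclasses import dataclass
from typing import Any


@dataclass
class VariableDefinition:
    vid: int          # unique numeric ID (SVID, DVID, or ECID depending on kind)
    name: str         # human-readable name
    kind: str         # "SV", "DV", or "EC"
    fmt: str          # SECS-II format code, e.g. "F4", "U4", "A", "L"
    value: Any = None


gem_variables = [
    VariableDefinition(300, "ModuleTemperature", "SV", "F4", 219.96),
    VariableDefinition(501, "ChangedRecipe", "DV", "A"),          # valid only with its event
    VariableDefinition(700, "MaxSimultaneousTraces", "EC", "U1", 8),
]

svids = [v.vid for v in gem_variables if v.kind == "SV"]
print(svids)   # -> [300]
```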

Data Polling

As mentioned, the factory host will often get data on a regular basis through trace reports and event reports that it defines. GEM also provides a method for the factory host to poll data based on its needs.

Status Variables

The host can query the value of status variables at any time by sending an S1F3 message containing a list of SVIDs for which to obtain the value. If the list has a length of one, only the value of the single SV will be returned. If the list has a length of zero, the values of all SVs defined in the interface will be returned. The values are returned in a list in an S1F4 message from the equipment.

The host can also request a list of SV names by sending an S1F11 message to the equipment. The list rules mentioned above apply to this message as well. The return message will contain an entry for each SV that displays its SVID, Name and Unit.
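As a rough model of those list rules (not a SECS-II encoder), an equipment-side lookup might behave like the sketch below, where a zero-length request returns every status variable.

```python
# A minimal sketch of the S1F3/S1F4 list rules: an empty SVID list means
# "return all status variables". This models the behavior only; it does not
# encode SECS-II messages. Values are invented for illustration.

status_variables = {
    300: ("ModuleTemperature", 219.96),
    301: ("ChamberPressure", 0.0119),
    302: ("Clock", "2018-01-22T14:20:34.8"),
}


def handle_s1f3(svid_list):
    """Return the S1F4-style value list for the requested SVIDs."""
    if not svid_list:                       # zero-length list: all SVs, in ID order
        svid_list = sorted(status_variables)
    return [status_variables[svid][1] for svid in svid_list]


print(handle_s1f3([300]))    # -> [219.96]
print(handle_s1f3([]))       # -> [219.96, 0.0119, '2018-01-22T14:20:34.8']
```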

Equipment Constants

Similar to the way SVs work, the host can also query the values of equipment constants defined in the GEM interface by sending an S2F13 message. The values will be returned from the equipment using an S2F14 message.

Also similar to SVs, the names of ECs can be queried using an S2F29 message.

Data Variables

Since data variables are only valid in the context of a collection event, there is not a method for polling values of data variables. The value of a Data Variable can only be reported in a collection event report.

Other

In addition to the methods for polling data discussed above, the following items can also be obtained from a GEM host by polling the equipment:

  • Collection Events (CE) – The host can query what Collection Events are available on the GEM interface along with what DVs are associated with each CE. These are requested using the S1F23 message.
  • Alarms – The host can query what Alarms are available on the system by sending an S5F5 message listing the ALIDs of the desired alarms. The return message lists the alarm code and alarm text associated with each ALID. Two status variables are required to be present in every GEM interface: AlarmsEnabled contains a list of IDs of all enabled alarms on the equipment, and AlarmsSet contains the list of IDs of alarms on the equipment that are currently in the Set state. Since these values are status variables, they can be queried at any time.
  • MDLN and SOFTREV – The response to an S1F1 (Are you there?) message will contain the equipment model type (MDLN) and software revision (SOFTREV) for the equipment.
  • DateTime – The date and time for the equipment can be requested using an S2F17 message. The host can synchronize the equipment’s time using the S2F31 message. GEM requires the equipment to maintain a Clock SV containing the current time. Allowing the host to query and synchronize time provides the capability to order nearly simultaneous events on the system.

Trace Data Collection

Trace data collection provides a method of sampling data on a periodic basis. The time-based approach to data collection is useful in tracking trends or repeated applications within a time window, or monitoring of continuous data.

When creating a trace definition, the host provides the following:

  • Sample period – the time between samples. The resolution is in centiseconds, so it is possible to gather data very quickly using a trace. It is common for equipment to support trace intervals as fast as 10 Hz.
  • Group size – number of samples included in a trace report
  • SVIDs – List of status data to be included in the trace
  • Total samples – number of samples to be taken over the life of the trace
  • Trace request id – identifier of the trace request (GEM only allows trace IDs of type integer)

The host defines a trace request by using the S2F23 message. Trace reports are sent from the equipment to the host using the S6F1 message.
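Before the worked example that follows, here is a small sketch of the fields a host assembles for a trace definition, expressed as a plain data structure. The field names are invented for illustration and the code does not encode an actual S2F23 message.

```python
# A minimal sketch of the fields in a GEM trace definition (S2F23).
# Field names are invented for illustration; this is not a SECS-II encoder.

from dataclasses import dataclass
from typing import List


@dataclass
class TraceRequest:
    trace_id: int            # GEM requires an integer trace ID
    sample_period: str       # "hhmmsscc", e.g. "00001500" = every 15 seconds
    total_samples: int       # samples over the life of the trace
    group_size: int          # samples accumulated per S6F1 report
    svids: List[int]         # status variables to sample

    def reports_expected(self) -> int:
        # each report carries group_size samples of every SVID
        return -(-self.total_samples // self.group_size)   # ceiling division


trace = TraceRequest(
    trace_id=100,
    sample_period="00001500",
    total_samples=20,
    group_size=4,
    svids=[300, 301],        # chuck temperature, chamber pressure
)
print(trace.reports_expected())   # -> 5 reports, one per minute
```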

Trace Sample

Let’s suppose that a piece of equipment is processing a wafer and the processing takes 5 minutes. It is important to keep the chuck temperature within a certain acceptable range and to make sure that the chamber pressure stays below a specified level. It is sufficient to monitor the values at 15 second intervals, but we can create groups of data to only receive reports once a minute. The host could send an S2F23 message with the following trace configuration:

Trace ID: 100 (ID must be an integer)
Sample Period: 00001500 (take a sample every 15 seconds)
Total Samples: 20 (a sample every 15 seconds for 5 minutes)
Group Size: 4
SVID List:
   300 (ID of the status variable that contains information about chuck temperature)
   301 (ID of the status variable that contains information about chamber pressure)

After one minute, the first trace report will be delivered using an S6F1 message from the equipment. The message will contain the following information:

100 (Trace ID)
4 (last sample number)
2018-01-22T14:20:34.8 (date format depending on TimeFormat equipment constant)
Status Value List: (Length is 8: 2 SVs with a group size of 4)
   219.96 (chuck temperature for first sample)
    0.0112 (pressure for first sample)
   219.97 (chuck temperature for second sample)
   0.0122 (pressure for second sample)
   219.97 (chuck temperature for third sample)
   0.0120 (pressure for third sample)
   219.96 (chuck temperature for fourth sample)
   0.0119 (pressure for fourth sample)

After another minute the trace report may look like the following:

100 (Trace ID)
8 (last sample number)
2018-01-22T14:21:34.8 (date time shows one minute later than the first trace)
Status Value List: (Length is 8: 2 SVs with a group size of 4)
   219.96 (chuck temperature for fifth sample)
   0.0112 (pressure for fifth sample)
   219.97
   0.0122
   220.01
   0.0120
   220.00
   0.0119

Three more reports will be received at one-minute intervals. The host can check returned values in the report and react accordingly if values are out of the expected range.
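As a sketch of that last step, the host might unpack each report into per-sample readings and flag anything outside an expected window, as below. The limits and the helper function are assumptions made for illustration.

```python
# A minimal sketch of checking trace report values against expected ranges.
# Report layout: group_size samples, each containing one value per SVID,
# flattened in sample order (as in the example above).

def check_trace_report(values, group_size, limits):
    """limits is a list of (low, high) tuples, one per SVID, in SVID-list order."""
    svid_count = len(limits)
    assert len(values) == group_size * svid_count
    violations = []
    for sample in range(group_size):
        for i, (low, high) in enumerate(limits):
            value = values[sample * svid_count + i]
            if not (low <= value <= high):
                violations.append((sample + 1, i, value))
    return violations


report = [219.96, 0.0112, 219.97, 0.0122, 219.97, 0.0120, 219.96, 0.0119]
limits = [(215.0, 225.0),    # chuck temperature window
          (0.0, 0.0150)]     # maximum chamber pressure
print(check_trace_report(report, group_size=4, limits=limits))   # -> [] (all in range)
```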

Conclusion

The host could poll status data using S1F3 if it wanted to check a value at a given point in time. It can set up a trace if it wants to continuously collect data over a given period of time.

Using the data sampling methods outlined in this blog will allow host applications to poll the data they need when they need it. GEM provides flexibility in requesting data from the equipment, either by allowing the host to query values at a given point in time or by sampling values on a periodic basis using a trace.

Click here to read the other articles in our SECS/GEM Features and Benefits series. 

To download a white paper on an introduction to SECS/GEM, Click below:

SECS/GEM White Paper

Topics: Industry Standards, SECS/GEM, Smart Manufacturing/Industry 4.0, SECS/GEM Features & Benefits Series

Sending data in chunks to optimize network performance

Posted by Derek Lindsey: Product Manager on Oct 19, 2017 10:11:00 AM

The Interface A / EDA standards define powerful methods for collecting data from an equipment control application. The data collection can be as simple as querying the values of a parameter or two, or as complex as gathering thousands of parameter values across multiple reports. EDA specifies the use of internet-standard technologies such as HTTP, XML and SOAP for collecting this data.

It is possible to define so many data collection plans gathering so much data that the sheer amount of data causes network performance to degrade. To remedy this situation, the EDA standards provide ways of sending the data in “chunks,” which dramatically improves the performance of XML over HTTP.

Two methods for sending data in chunks are grouping and buffering.

Grouping

Grouping only applies to individual Trace Requests within a Data Collection Plan (DCP).  If I have a trace request with an interval of one second, a group size of 1 would generate a report every second and send it across the wire. If I change my group size to 10, a report would still be generated every second, but the report would not be sent across the wire until 10 reports have accumulated. Each report has its own timestamp and they are arranged in the order they occur. The following diagram shows a trace data collection report with a group size of 3.

Buffering

Buffering is different from grouping in that the buffer interval (in minutes) applies to an entire DCP rather than individual trace requests within that plan. For example, if I have a DCP with three trace requests and two event requests defined with a buffer interval of 1 (meaning one minute), the trace reports would still be generated at the specified trace interval. Event reports would be generated as the events are triggered. The reports are not sent to the EDA client until the buffer interval expires. At that point, all the data reports that were generated within that buffer interval time are packaged and sent to the EDA client.

Combining Grouping and Buffering

Grouping and buffering can be combined as well. Groups are still defined on a per trace basis. If a group size has not been met when the buffer interval expires, the group report will be in the next buffer report that is sent.
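A quick way to see how grouping and buffering interact is to simulate when report packages leave the equipment. The sketch below performs only that timing arithmetic; it is not an EDA implementation, and the intervals are illustrative.

```python
# A minimal sketch of when trace reports leave the equipment under grouping
# and buffering. Timing arithmetic only; not an EDA implementation.

def report_send_times(trace_interval_s, group_size, buffer_interval_s, duration_s):
    """Return the times (in seconds) at which report packages go on the wire."""
    send_times = []
    pending_samples = 0
    pending_groups = 0
    next_buffer_flush = buffer_interval_s

    for t in range(trace_interval_s, duration_s + 1, trace_interval_s):
        pending_samples += 1
        if pending_samples == group_size:        # a group report is generated
            pending_samples = 0
            pending_groups += 1
        if buffer_interval_s == 0 and pending_groups:
            send_times.append(t)                 # no buffering: send each group immediately
            pending_groups = 0
        elif buffer_interval_s and t >= next_buffer_flush:
            if pending_groups:
                send_times.append(t)             # buffered groups go out together
                pending_groups = 0
            next_buffer_flush += buffer_interval_s
    return send_times


# 1-second trace, group size 10, no buffering: one report every 10 seconds.
print(report_send_times(1, 10, 0, 60))    # -> [10, 20, 30, 40, 50, 60]
# Same trace with a 1-minute buffer: everything arrives in one package at 60 s.
print(report_send_times(1, 10, 60, 60))   # -> [60]
```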

Summary

With the provision for transmitting data in blocks, using grouping for trace data reports and/or buffering for all data reports, EDA is well suited for collecting large amounts of data without having a negative effect on network performance.  

Topics: EDA/Interface A, Data Collection/Management

Implementing CIMPortal Plus

Posted by Derek Lindsey: Product Manager on Jul 7, 2017 12:07:00 PM


Generally, when I do a DIY (do-it-yourself) project around the house, I spend the majority of the time searching for my tools. The other day I was helping a friend with a project. He had a well-organized tool box and it seemed that the perfect tool was always at his fingertips. I was amazed at how fast the project went and how easy it was when the right tools were handy.

In April of 2016, we published a blog called OEM EDA Implementation Best Practices that outlined ten things to consider when designing an equipment-side EDA / Interface A solution to fit your needs. This blog post analyzes a few of those recommendations and looks at how using the Cimetrix EDA products CIMPortal Plus, ECCE Plus and EDATester (a well-stocked and organized tool box) makes it very easy to follow those recommendations.

The basic steps in creating a useful EDA implementation are:

  1. Determine which data will be published
  2. Build an equipment model
  3. Deploy the model
  4. Publish the data from the equipment control application
  5. Set up a data collection application
  6. Test the interface

The blog post mentioned above states, “Since the content of the equipment metadata model is effectively the data collection contract between the equipment supplier and the factory users, your customer’s ultimate satisfaction with the EDA interface depends on the content and structure of this model.” Before building your model, you need to determine what data the equipment will make available for collection. CIMPortal Plus has the concept of a Data Collection Interface Module (DCIM) that publishes this data to the EDA server. The engineer building the model will map the data from the DCIM into the equipment model.

Once the mapping of the data is complete, the engineer will need to put this data in a format understood by the server. CIMPortal Plus provides a utility called Equipment Model Developer (EMDeveloper) that makes it easy to create the hierarchy of your equipment (SEMI E120) and embed the data from the DCIM into that model (SEMI E125). If you use the tools and best practices provided in EMDeveloper, your equipment model will conform to the SEMI E164 (EDA Common Metadata) standard as well. This can be very useful when writing data collection applications, which is why conformance to E164 is being required by more and more fabs. The E164 standard was developed to encourage companies using Interface A connections to provide a more common representation of equipment metadata based upon the SEMI E125 Specification for Equipment Self-Description. This makes data collection more uniform across these pieces of equipment.


Once the model is created and validated, it is deployed to the CIMPortal Plus server. The server is the component that manages and tracks all data collection plans, reports, tasks, access control and timing. 

With the DCIM information embedded in the model (described above), it is easy for the equipment control application to push the data to be published to the EDA server for collection. This is done by using a simple API available on the DCIM interface.

In addition to CIMPortal Plus server capabilities, Cimetrix has other products available to help with client-side data collection. ECCE Plus is an industry approved method for manually testing EDA implementations. For users who need to create client-side data collection applications, Cimetrix also provides EDAConnect - a powerful library that handles all the connection details and allows developers to concentrate on the specific data collection and analysis tasks.

Fabs receive a wide variety of equipment with EDA implementations from numerous vendors. They want to use a single verification application to make sure that all EDA implementations are compliant with the EDA standards. That’s where EDATester comes in. EDATester is a new product that allows users to quickly and accurately verify EDA standards compliance by automating the test procedures of the ISMI EDA Evaluation Method, which were defined specifically for this purpose. If you use Cimetrix products to implement your EDA interface, you are guaranteed to be compliant with the SEMI EDA standards. But whether you use Cimetrix products to implement your EDA interface or not, you (and your fab customer) want to rest assured that your implementation is fully compliant. Moreover, you’ll want to know that you’ve met the fab’s performance criteria for your equipment interface. To support this use case, the EDATester also allows users to quickly profile the performance of EDA data collection on a piece of equipment so that fabs and those using the data will know the boundaries within which they can successfully collect equipment data.

With the well-stocked EDA tool box provided by Cimetrix, following the EDA best practices in creating an efficient, standards-compliant EDA interface becomes a snap.

Topics: EDA/Interface A, Cimetrix Products

CCF Series Wrap-up

Posted by Derek Lindsey: Product Manager on Apr 12, 2017 11:00:00 AM

One of the habits outlined in Stephen R. Covey's book, The 7 Habits of Highly Effective People, is to "Begin with the End in Mind." He goes on to explain that beginning with the end in mind means to "begin each day, task, or project with a clear vision of your desired direction and destination, and then continue by flexing your proactive muscles to make things happen.”

Beginning an equipment control project with a clear vision of your desired destination makes it much more likely that you will have a successful project. A blog post titled CIMControlFramework Work Breakdown dated March 15, 2016 outlined the tasks necessary to create a first-class equipment control application using CIMControlFramework (CCF). Since that initial blog post, Cimetrix has explored each of the tasks labeled in the work breakdown structure in greater depth in their own blog posts as follows:

Looking back from the successful completion of a CCF equipment control application makes it clear that the work breakdown vision from the beginning helped gain that success.

You can also reference the following blog posts related to CimControlFramework:

CIMControlFramework Dynamic Model Creation

Learning from Others

Build vs. Buy

WCF and CIMControlFramework

To learn more about CCF, visit the CIMControlFramework page on our website!

Topics: Equipment Control-Software Products, Cimetrix Products

Storing Data in a CCF application

Posted by Derek Lindsey: Product Manager on Mar 8, 2017 1:00:00 PM

In Sir Arthur Conan Doyle’s A Scandal in Bohemia, Sherlock Holmes tells Watson, “It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.”

In a March 2016 blog post on the CCF work breakdown, Cimetrix listed eleven points to be taken into consideration when starting an equipment control application using CIMControlFramework (CCF). One of the tasks in the work breakdown is to determine what kind of data collection and storage is to be used in your CCF application and how that data is to be stored.


CCF provides several mechanisms for collecting and storing data. These include:

  • History Objects
  • Full GEM Interface
  • Full EDA/Interface A Interface
  • Centralized DataServer

The remainder of this blog post will look at each of these items in more detail.

History Objects

In early iterations of CCF, users noticed that, when using logging, there were certain messages they wanted to be able to query without the overhead of having to search all log messages. To accommodate this need, History objects were introduced. Some examples of these objects in CCF are EPT History, Wafer History and Alarm History. When an important event happens in the life of a history object, a log message is written to a database table (configured during CCF installation) that corresponds to that type of object. That database table can be queried for the specific historical information for only that type of data.

Full GEM/GEM 300 Interface

As described in a CCF blog post from February 15, 2017, CCF comes standard with a fully implemented GEM and GEM 300 interface. The GEM standards allow users to set up trace and event reports for the collection of GEM data. No additional programming is required by the application developer to have access to the GEM data collection.

Full EDA/Interface A Interface

The same blog post of February 15th also states that CCF comes standard with a fully implemented Freeze II and E164 compliant EDA interface. EDA can be used to set up data collection plans based on Events, Exceptions and Traces. With the E157 standard and conditional trace triggers, EDA makes it easy to zero in on the data you want without having to collect all data and then sift through it later.

Centralized DataServer

In order to create, initialize, populate and pass data, CCF uses a centralized DataServer object. The DataServer is responsible for creating the dynamic EDA equipment model as well as populating CIMConnect with Status Variables, Data Variables, Collection Events and Alarms. All this is done at tool startup so that the available data exactly matches the tool that is in use.

Data is routed to the DataServer which then updates the appropriate client – such as EDA, GEM or the Operator Interface. An equipment control application can register to receive an event from the data server when data changes. Users can key off of this event to capture that data and route it to a database as desired. Since all tool manufacturers have different requirements for which database to use and how data is written to that database, CCF leaves the actual SQL (or equivalent) commands for writing the data to the equipment application developer.
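The sketch below illustrates that last pattern in general-purpose terms: register for a data-changed notification, then persist each change with whatever database commands the tool maker chooses. It is written in Python against SQLite purely for illustration and is not the CCF DataServer API (CCF itself is a .NET framework).

```python
# A minimal, hypothetical sketch of the "register for data changes, then
# persist them" pattern described above. Not the CCF API; CCF leaves the
# actual database commands to the equipment application developer.

import sqlite3
from datetime import datetime


class DataServer:
    """Stand-in publisher that notifies subscribers when a value changes."""
    def __init__(self):
        self._subscribers = []
        self._values = {}

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def set_value(self, name, value):
        if self._values.get(name) != value:
            self._values[name] = value
            for callback in self._subscribers:
                callback(name, value)


def make_db_writer(connection):
    connection.execute(
        "CREATE TABLE IF NOT EXISTS data_history (ts TEXT, name TEXT, value TEXT)")

    def on_data_changed(name, value):
        connection.execute(
            "INSERT INTO data_history VALUES (?, ?, ?)",
            (datetime.now().isoformat(), name, str(value)))
        connection.commit()

    return on_data_changed


db = sqlite3.connect(":memory:")
server = DataServer()
server.subscribe(make_db_writer(db))
server.set_value("ChuckTemperature", 219.96)
print(db.execute("SELECT name, value FROM data_history").fetchall())
```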

With CCF, data collection and storage is … elementary.

To learn more about CCF, visit the CIMControlFramework page on our website!

Topics: SECS/GEM, EDA/Interface A, Equipment Control-Software Products, Cimetrix Products

Designing Recipes in CCF

Posted by Derek Lindsey: Product Manager on Jan 24, 2017 11:00:00 AM

Anyone above a certain age will be able to tell you what you get when you combine two all-beef patties, special sauce, lettuce, cheese, pickles, onions – on a sesame seed bun. There are many who would argue that what sets a Big Mac apart from other burgers – and has made it one of the best-selling products of all time – is the special sauce.

In a March 2016 blog post, Cimetrix listed eleven points to be taken into consideration when starting an equipment control application using CIMControlFramework (CCF). One of the things to consider is how you want to provide process and path information through the tool using recipes. This blog post delves a little deeper into the recipe aspect of equipment control applications.

In CCF, recipes are either process recipes or sequence recipes.


A process recipe contains the instructions to be carried out by a particular process module. These instructions can range from temperature settings to the types of gas to flow. The most important aspect of any tool control application is allowing the tool manufacturer to do what they do best – perform their process better than anyone else in the world. The process recipe allows tool manufacturers to add their special sauce to the wafer. CCF provides a sample process recipe implementation as well as a very simple process recipe editor. Since recipes are generally custom for each tool manufacturer, CCF application developers usually want to customize the recipe contents for a process recipe.

If the processing of material is the special sauce, the rest of the application, moving the wafer through the tool, is a necessary evil. To assist in moving material through the tool, CCF also provides a sequence recipe. A sequence recipe determines which process recipes are to be run, at which modules to run them, and the order in which this is to occur. CCF provides a sample sequence recipe editor that can be used in creating sequence recipes or customized for each tool manufacturer’s needs.
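As an illustration of the split between the two recipe types, a sequence recipe can be pictured as an ordered list of (module, process recipe) steps, as in the made-up sketch below. This is not the CCF recipe format.

```python
# A minimal, made-up sketch of sequence vs. process recipes.
# Not the CCF recipe format; names and settings are invented.

process_recipes = {
    # the "special sauce": per-module processing instructions
    "DepositLayerA": {"temperature_c": 220.0, "gas": "N2", "time_s": 90},
    "CoolDown":      {"temperature_c": 25.0,  "gas": "Ar", "time_s": 30},
}

# the sequence recipe: which process recipes run, where, and in what order
sequence_recipe = [
    ("PM1", "DepositLayerA"),
    ("PM2", "CoolDown"),
]

for module, recipe_name in sequence_recipe:
    settings = process_recipes[recipe_name]
    print(f"{module}: run {recipe_name} -> {settings}")
```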

Both process and sequence recipes can be created on the tool or downloaded from a factory host. CCF provides a handler that receives recipes from the host and stores them in the Recipe Server. Regardless of where the recipes are created, CCF’s Recipe Server stores the recipes locally and passes them in to the scheduler when a job is to be run. The Recipe Server allows recipes to be stored as Engineering recipes while they are being finalized. They can then be promoted to Production recipes for use in a production environment. 

By making use of recipes in CCF, you can ensure that your special sauce is applied to material processing to help make your tool one of the best-selling in history.

To learn more about CCF, visit the CIMControlFramework page on our website! 

Topics: Equipment Control-Software Products, Cimetrix Products