Industry News, Trends and Technology, and Standards Updates

Continuous Flow Sample Added to Cimetrix CIMControlFramework

Posted by Derek Lindsey: Product Manager on Oct 27, 2021 11:14:00 AM

Cimetrix CIMControlFramework™ (CCF) is a software development kit (SDK) that enables users to design and implement a high-quality equipment control solution using provided components for supervisory control, material handling, operator interface, platform and process control, and automation requirements. CCF is built on the reliable Cimetrix connectivity products which provide GEM/GEM300/EDA interface functionality.

See previous series of blog posts on the functionality of CCF here.

While CCF does provide a built-in interface to handle GEM300 messages, it can be used just as effectively for building back-end and electronics equipment control applications that handle the movement of chips and trays rather than wafers and carriers.

To demonstrate this ability, Cimetrix has added a continuous flow back-end sample as one of the fully working implementations provided with CCF. If you are already familiar with CCF, you will have seen the front-end Atmospheric and Vacuum cluster tool samples.

The continuous flow sample is different from these other samples as described below.

JEDEC input and output trays

For the Atmospheric and Vacuum samples, material is delivered as wafers in SEMI E87 carriers. For back-end and electronics markets, material is usually not in the form of a wafer and is not delivered in a carrier. For the Continuous Flow sample, the material is delivered on input trays and removed from the system on output trays. All trays used in the sample are similar to JEDEC trays, standard-defined trays for transporting, handling, and storing chips and other components. The trays have slots that can hold material in rows and columns. A JEDEC tray may appear as follows:

[Image: JEDEC tray]

The Continuous Flow sample allows users to specify the number of rows and columns in a tray using configuration parameters. The sample has two input trays and two output trays.
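As a rough illustration, a tray with configurable rows and columns might be modeled as follows. This is a hypothetical sketch; the names and defaults are illustrative, not the actual CCF configuration parameters.

```python
# Hypothetical JEDEC-style tray model: a grid of slots, sized by
# configuration, where each slot is empty (None) or holds a material ID.
class Tray:
    def __init__(self, rows, columns):
        self.rows = rows
        self.columns = columns
        self.slots = [[None] * columns for _ in range(rows)]

    def place(self, row, col, material_id):
        if self.slots[row][col] is not None:
            raise ValueError(f"slot ({row}, {col}) is occupied")
        self.slots[row][col] = material_id

    def remove(self, row, col):
        material_id, self.slots[row][col] = self.slots[row][col], None
        return material_id

    def first_empty_slot(self):
        # Scan row by row, column by column, for the next open slot.
        for r in range(self.rows):
            for c in range(self.columns):
                if self.slots[r][c] is None:
                    return (r, c)
        return None  # tray is full

    def is_full(self):
        return self.first_empty_slot() is None

tray = Tray(rows=4, columns=8)  # dimensions would come from configuration
tray.place(0, 0, "chip-001")
```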

Continuous Flow

As the name of the Continuous Flow sample indicates, material is continually processed until there is no more material or until the user tells it to stop. The sample does not use SEMI E40 Process Jobs or SEMI E94 Control Jobs to determine how material is processed. Rather, the user selects a recipe to use during processing and presses the Start button. Material will continue to be processed until the Stop button is pressed.

By default, the Continuous Flow sample will process all material from the first input tray and then all of the material from the second input tray. When an input tray becomes empty, the empty tray will be removed and replaced with a full one. Similarly, when an output tray becomes full, it is automatically removed and replaced with an empty one. This allows the processing to run continuously until stopped.

Scheduler

The Continuous Flow sample scheduler is different from the schedulers in the Atmospheric and Vacuum samples in that it is not dependent on Process Jobs or Sequence Recipes to know how to move material through the system. It simply picks the next input material and places it in the first available process slot. It then picks the next completed material and places it in the first available output slot.
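The pick-and-place policy described above, together with the automatic tray replacement, can be sketched as a simple loop. This is a toy model under stated assumptions (instantaneous processing, list-based trays), not the actual CCF scheduler API.

```python
from collections import deque

# Toy sketch of the Continuous Flow scheduling policy: take the next
# input material, and place finished material in the first available
# output slot; empty input trays and full output trays are swapped out.
def run_continuous_flow(input_trays, output_tray_size):
    """Move material from input trays to output trays until inputs run out.

    input_trays:      list of lists of material IDs (processed in order)
    output_tray_size: slots per output tray; a full output tray is
                      replaced by an empty one, mirroring the sample
    """
    inputs = deque(input_trays)
    output_trays = [[]]
    while inputs:
        tray = inputs[0]
        if not tray:                  # empty input tray: remove it
            inputs.popleft()
            continue
        material = tray.pop(0)        # pick the next input material
        # ...material would occupy the first available process slot here...
        if len(output_trays[-1]) == output_tray_size:
            output_trays.append([])   # full output tray swapped for empty
        output_trays[-1].append(material)  # first available output slot
    return output_trays
```

Running this with two input trays holding three chips total and a two-slot output tray fills the first output tray and starts a second one.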

Visualization

A new visualization was created for the Continuous Flow sample. Rather than using round material, SEMI E87 carriers, load ports, and wafer handling robots, the new visualization draws rectangular material that looks like chips that may arrive in JEDEC trays. Rather than trying to render a robot, the visualization renders a circular end effector that moves material through the system. The following screenshot displays what the sample visualization looks like while processing.

[Screenshot: Continuous Flow sample visualization during processing]

In an upcoming version of CCF, the components of this visualization will be included in a visualization library that users can employ to customize their visualization more easily than has previously been possible in CCF.

Remote Commands

The Continuous Flow sample comes with three fully implemented remote commands that allow a host or host emulator to run the continuous flow sample. These commands are:

  • PP_SELECT – Specify the recipe to be used for processing material.
  • START – Start material processing using the selected recipe.
  • STOP – Stop introducing new material to be processed; processing stops after all in-process material has been sent to output trays.
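On the equipment side, dispatching these commands might look like the following sketch. An S2F49 enhanced remote command carries a command name (RCMD) plus a list of parameter name/value pairs (CPNAME/CPVAL); the handler class and the "RecipeName" parameter name below are assumptions for illustration, not the actual CCF implementation.

```python
# Hypothetical equipment-side handler for the three remote commands.
# Parameter names and the class itself are illustrative only.
class ContinuousFlowCommands:
    def __init__(self):
        self.selected_recipe = None
        self.running = False

    def handle(self, rcmd, params):
        """Return an HCACK-style code: 0 = acknowledged, 1 = rejected."""
        params = dict(params)  # (CPNAME, CPVAL) pairs -> lookup table
        if rcmd == "PP_SELECT":
            self.selected_recipe = params["RecipeName"]
        elif rcmd == "START":
            if self.selected_recipe is None:
                return 1       # cannot start without a selected recipe
            self.running = True
        elif rcmd == "STOP":
            # Stop introducing new material; in-process material finishes.
            self.running = False
        else:
            return 1           # command not recognized
        return 0

commands = ContinuousFlowCommands()
commands.handle("PP_SELECT", [("RecipeName", "Recipe1")])
commands.handle("START", [])
```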

The following shows the S2F49 remote command body for selecting the recipe as sent from Cimetrix EquipmentTest.

[Screenshot: S2F49 remote command body in Cimetrix EquipmentTest]

Conclusion

We hope that the new Continuous Flow sample in CCF gives those creating semiconductor back-end or electronics equipment control solutions a great starting point for their applications. Please contact Cimetrix for additional information by clicking the button below.

Ask an Expert

Topics: Industry Highlights, Semiconductor Industry, Equipment Control-Software Products, Smart Manufacturing/Industry 4.0, Cimetrix Products

Announcing the Release of CIMControlFramework 6.0

Posted by Derek Lindsey: Product Manager on Apr 14, 2021 11:30:00 AM

The Cimetrix Connectivity Group of PDF Solutions is happy to announce that Cimetrix CIMControlFramework™ (CCF) version 6.0 is now available for download.

CCF is a software development kit (SDK) that enables users to design and implement a high-quality equipment control solution using provided components for supervisory control, material handling, operator interface, platform and process control, and automation requirements. CCF is built on the reliable Cimetrix connectivity products which provide GEM/GEM300/EDA interface functionality.

We have previously published a series of blog posts on the functionality of CCF. The same great functionality users have come to expect from CCF is still available, but in a cleaner, slicker, easier-to-use package.

Reorganized directory structure

In versions before CCF 6.0, core CCF packages (packages provided by CCF) were contained in the same directory as sample code and runtime files. This made it difficult for CCF users to distinguish the code they needed to customize from the code that is core to CCF. (Note: you can still customize core CCF functionality, but it is not required.) In this release, we modified the directory structure to identify more clearly what is core CCF and what is sample or custom code. This is closer to the structure followed by CCF applications. The following diagram shows the new structure:

[Diagram: CCF 6.0 directory structure]

In addition to clarifying CCF components, the new structure allows us to easily develop samples for additional equipment types. New samples will be added in future versions of CCF.

New WPF framework

Since CCF started providing a Windows Presentation Foundation (WPF) framework, we have received feedback on WPF features users would like added to the framework. Also, our engineers have continued to improve their WPF expertise, which has led to other improvements. CCF 6.0 includes the requested changes and best-practice improvements in the new WPF framework. Some of these changes include:

  • Simplified hierarchy which makes it easier to understand which objects to inherit from.


  • Centralized style elements to allow users to change the look and feel (skin) of the operator interface to meet their needs.
  • Enhanced controls library that provides common controls for use in creating equipment control.
  • Increased E95 compliance, available with a configurable control panel.
  • Accelerated screen creation is possible with the change in hierarchy organization and the enhanced control library.
  • Richer set of native WPF screens. In earlier versions, CCF had several native WPF screens, but also had many screens created with WinForms and hosted in WPF. CCF 6.0 has all native WPF screens in the WPF sample operator interface. These screens can be reused, customized, or replaced. (Note: WinForms screens are also still available in CCF 6.0.)

The following image shows the main screen of the WPF operator interface for the CCF atmospheric equipment sample application. Most of the controls on the screen are available for use and customization by CCF developers.

[Screenshot: WPF operator interface main screen for the atmospheric sample]

Updated samples

CCF has contained fully functional atmospheric and vacuum sample applications for many years. Over the years, we have improved their scheduling, simulation, and device interface interaction, but the overall structure of the samples had remained the same. With CCF 6.0, the atmospheric and vacuum sample applications were updated to take advantage of the other changes made in CCF. These updates make the samples more useful in illustrating the proper use of CCF and provide a better starting point for creating custom applications.

Spring cleaning

CCF was originally released in the summer of 2011, making it 10 years old. Over the years, several of its methods, objects, and devices have become obsolete. They were not removed from the product for backward compatibility reasons, but they were marked as obsolete. Because CCF 6.0 is a major release, we took the opportunity to do some spring cleaning and remove the obsolete items. CCF is now cleaner and tighter, and using it is much clearer.

Training material and upgrade guide

All the PowerPoint slides, lab documents, and corresponding solutions used for training developers on CCF have been updated for CCF 6.0. We have already successfully used the new training materials with a few customers to help them get started with their equipment control application development.

As part of CCF 6.0, we provide a CCF 5.10 to CCF 6.0 Upgrade Guide that contains detailed instructions on how to migrate applications created using previous versions of CCF to CCF 6.0.

Conclusion

We have been looking forward to the CCF 6.0 release for a long time and are excited for developers to get started using it. We are confident existing users will like the changes and that new users will have a good springboard in getting started with their equipment control application needs. We look forward to working with you and hearing from you.

Topics: Industry Highlights, Equipment Control-Software Products, Smart Manufacturing/Industry 4.0, Cimetrix Products

Semiconductor Back End Processes: Adopting GEM Judiciously

Posted by Brian Rubow: Director of Solutions Engineering on May 14, 2020 10:20:17 AM

Equipment Communication Leadership in Wafer Fabrication

For many years the semiconductor industry’s wafer fabrication facilities, where semiconductor devices are manufactured on [principally] silicon substrates, have universally embraced and mandated the GEM standard on nearly 100% of the production equipment. This includes the complete spectrum of front end of line (FEOL – device formation) and back end of line (BEOL – device interconnect) processes and supporting equipment. Most equipment also implements an additional set of SEMI standards, often called the “GEM 300” communication standards because their creation and adoption coincided with the first 300mm wafer manufacturing. Interestingly, there are no features in these standards specific to a particular wafer size.

Together, the GEM and GEM 300 standards have enabled the industry to process substrates in fully automated factories like Micron demonstrates in this video and GLOBALFOUNDRIES demonstrates in this video.

Specifically, the GEM 300 standards are used to manage the following crucial steps in the overall fabrication process:

  • automated carrier delivery and removal at the equipment
  • load port tracking and configuration
  • carrier ID and carrier content (slot map) verification
  • job execution where a recipe is assigned to specific material
  • remote control to start jobs and respond to crisis situations (e.g., pause, stop or abort processing)
  • material destination assignment after processing
  • precise material location tracking and status monitoring within the equipment
  • processing steps status reporting
  • overall equipment effectiveness (OEE) monitoring

Additionally, the GEM standard enables

  • the collection of unique equipment data to feed numerous data analysis applications such as statistical process control
  • equipment-specific remote control
  • alarm reporting for fault detection
  • interaction with an equipment operator/technician via on-screen text
  • preservation of valuable data during a communication failure

Semiconductor Back End Process Industry Follows the Lead

After wafer processing is completed, the wafers are shipped to a semiconductor back end manufacturing facility for packaging, assembly, and test. Historically this industry segment has used GEM and GEM 300 sporadically but not universally. This is now changing.

In North America, SEMI created a new task force called “Advanced Back end Factory Integration” (ABFI) to organize and facilitate this industry segment’s implementation of more robust automation capabilities. To this end, the task force is charged with defining GEM and GEM 300 support in back end equipment, including processes such as bumping, wafer test, singulation, die attach, wire bonding, packaging, marking, final test and final assembly. As its first priority, the task force has focused on updating the SEMI E142 standard (Substrate Mapping) to enhance wafer maps to report additional data necessary for single device traceability. Soon the task force will shift its focus to define GEM and GEM 300 back end use cases and adoption more clearly.

Why GEM?

GEM was selected for several reasons.

  • Much of the equipment in the industry already has GEM interfaces.
  • GEM provides two primary forms of data collection that are suitable for all data collection applications: trace reports, where the factory can poll selected equipment and process status variables at any frequency, and collection event reports, which allow a factory system to subscribe to notifications of just the collection events it is interested in and to specify what data to report with each of those events.
  • Most of the equipment suppliers have GEM experience either from implementing GEM on the back end equipment or from implementing GEM on their frontend equipment.
  • Factories can transfer experienced engineers from semiconductor frontend facilities into the back end with the specific goal of increasing back end automation.
  • GEM has proven its flexibility to support any type of manufacturing equipment. GEM can be implemented on any and all equipment types to support remote monitoring and control.
  • GEM is a highly efficient protocol, publishing only the data that is subscribed to in a binary format that minimizes computing and network resources.
  • GEM is self-describing. It takes very little time to connect to an equipment’s GEM interface and collect useful data.
  • GEM can be used to control the equipment, even when there are special features that must be supported. For example, it is straightforward to provide custom GEM remote commands to allow the factory to determine when periodic calibrations and cleaning should be performed to keep equipment running optimally.
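The event-report mechanism in the list above can be sketched in miniature: the factory links selected variables to a report, links the report to a collection event, and then receives only those variables when the event fires. This is an illustrative model only; a real GEM interface performs this setup with SECS-II messages (S2F33/S2F35 report and event-link definitions), and all names here are hypothetical.

```python
# Minimal sketch of GEM-style collection event reporting. VID, RPTID,
# and CEID follow GEM terminology; the class itself is illustrative.
class GemDataCollection:
    def __init__(self, variables):
        self.variables = variables   # VID -> current value
        self.reports = {}            # RPTID -> [VID, ...]
        self.event_links = {}        # CEID -> [RPTID, ...]

    def define_report(self, rptid, vids):
        self.reports[rptid] = list(vids)

    def link_event(self, ceid, rptids):
        self.event_links[ceid] = list(rptids)

    def fire_event(self, ceid):
        """Build the report the host would receive (S6F11-style)."""
        return {
            rptid: {vid: self.variables[vid] for vid in self.reports[rptid]}
            for rptid in self.event_links.get(ceid, [])
        }

dc = GemDataCollection({"Temperature": 85.2, "LotID": "LOT42", "Pressure": 1.1})
dc.define_report(10, ["Temperature", "LotID"])  # subscribe to two variables
dc.link_event(2001, [10])                       # report on event 2001 only
```

Note that firing an unlinked event yields nothing: the host receives only the data it subscribed to, which is what makes the protocol efficient.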

Improved Overall Equipment Effectiveness Tracking

The ABFI task force has already proposed some changes to the SEMI E116 standard (Specification for Equipment Performance Tracking, or EPT). EPT is one of several standards that can be implemented on a GEM interface to provide additional standardized performance monitoring behavior beyond the GEM message set. This standard already enables reporting when equipment and modules within the equipment are IDLE, BUSY and BLOCKED. A module might be a load port, robot, conveyor or process chamber. When BUSY, this standard requires reporting what the equipment or module is doing. When BLOCKED, this standard requires reporting why the equipment or module is BLOCKED.

After analyzing the requirements of the back end industry segment, the task force decided to adopt and enhance the EPT standard. For example, the current EPT standard does not make any distinction between scheduled and unscheduled downtime. However, a few minor changes to E116 would allow the factory to notify the equipment when downtime is scheduled by the factory, greatly enhancing the factory’s ability to track overall equipment effectiveness and respond accordingly.  
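A module tracker in the spirit of E116 might look like the following sketch. The IDLE/BUSY/BLOCKED states come from the standard; the scheduled-downtime flag models the task force's proposed enhancement. Field and method names are illustrative, not the actual E116 variable names.

```python
# Sketch of E116-style equipment performance tracking for one module
# (e.g. a load port, robot, conveyor, or process chamber).
class ModuleTracker:
    def __init__(self, name):
        self.name = name
        self.state = "IDLE"
        self.detail = None       # task when BUSY, reason when BLOCKED
        self.scheduled = False   # proposed: factory-scheduled downtime

    def busy(self, task):
        # E116 requires reporting *what* a BUSY module is doing.
        self.state, self.detail, self.scheduled = "BUSY", task, False

    def blocked(self, reason, scheduled=False):
        # E116 requires reporting *why* a module is BLOCKED; the
        # `scheduled` flag distinguishes planned maintenance from
        # unscheduled faults when computing OEE.
        self.state, self.detail, self.scheduled = "BLOCKED", reason, scheduled

    def idle(self):
        self.state, self.detail, self.scheduled = "IDLE", None, False

robot = ModuleTracker("transfer-robot")
robot.busy("moving tray to process slot")
robot.blocked("periodic calibration", scheduled=True)
```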

Additional Future Work

Many of the GEM 300 standards can be applied to back end equipment where applicable and beneficial. The task force is defining specific functional requirements and evaluation criteria to make these determinations and publish the resulting recommendations in a new standard. Representatives from several advanced back end factories are already closely involved in this work, but more participation is always welcome. For more information, click the button below!

Contact Us

Topics: Industry Highlights, SECS/GEM, Semiconductor Industry, Customer Support, Doing Business with Cimetrix, Cimetrix Products

The Convergence of Technologies and Standards in Smart Manufacturing Blog

Posted by Ranjan Chatterjee on Apr 22, 2020 11:45:00 AM

Feature by Ranjan Chatterjee, CIMETRIX
and Daniel Gamota, JABIL

Abstract

The vertical segments of the electronic products manufacturing industry (semiconductor, outsourced system assembly and test, and PCB assembly) are converging, and service offerings are consolidating due to advanced technology adoption and market dynamics. The convergence will cause shifts in the flow of materials across the supply chain, as well as the introduction of equipment and processes across the segments. The ability to develop smart manufacturing and Industry 4.0 enabling technologies (e.g., big data analytics, artificial intelligence (AI), cloud/edge computing, robotics, automation, IoT) that can be deployed within and between the vertical segments is critical. The International Electronics Manufacturing Initiative (iNEMI) formed a Smart Manufacturing Technology Working Group (TWG) that included thought leaders from across the electronic products manufacturing industry. The TWG published a roadmap that included the situation analysis, critical gaps, and key needs to realize smart manufacturing.

Article first posted by SMT007 Magazine.

Introduction

The future of manufacturing in the electronics industry depends on the ability to develop and deploy suites of technology platforms to realize smart manufacturing and Industry 4.0. Smart manufacturing technologies will improve efficiency, safety, and productivity by incorporating more data collection and analysis systems to create a virtual business model covering all aspects from supply chain to manufacturing to customer experience. The increased use of big data analytics and AI makes the collection of large volumes of data, and the subsequent analysis, more efficient. By integrating a portfolio of technologies, it has become possible to transition the complete product life cycle from supplier to customer into a virtual business model or cyber-physical model. Several industry reports project manufacturers will realize tens of billions of dollars in gains by 2022 after deploying smart manufacturing solutions. In an effort to facilitate the development and commercialization of the critical smart manufacturing building blocks (e.g., automation, machine learning (ML), data communications, digital thread), several countries established innovation institutes and large R&D programs. These collaborative activities seek to develop technologies that will improve traceability and visualization, enable real-time analytics for predictive process and machine control, and build flexible, modular manufacturing equipment platforms for high-mix, low-volume product assembly.

The vertical segments of the electronic products manufacturing industry (semiconductor (SEMI), outsourced system assembly and test (OSAT), and printed circuit board assembly (PCBA)) are converging, and service offerings are being consolidated. This is due to the acceleration of technology development and market dynamics, providing industry members in specific vertical segments an opportunity to capture a greater percentage of the electronics industry’s total profit pool.

The convergence of the SEMI, OSAT, and PCBA segments will cause shifts in the flow of materials across the supply chain, as well as the introduction of equipment and processes across the segments (e.g., back-end OSAT services offered by PCBA segment). OSAT services providers are using equipment and platforms typically found in semiconductor back-end manufacturing, and PCBA services providers are installing equipment and developing processes similar to those used by OSAT.

The ability to develop smart manufacturing technologies (e.g., big data analytics, AI, cloud/edge computing, robotics, automation, IoT) that can be deployed within the vertical segments as well as between the vertical segments is critical. In addition, the ability to enable the technologies to evolve unhindered is imperative to establish a robust integrated digital thread.

As the electronic products manufacturing supply chain continues to evolve and experience consolidation, shifts in the traditional flow of materials (e.g., sand to systems) will drive the need to adopt technologies that seamlessly interconnect all facets of manufacturing operations. The iNEMI Smart Manufacturing TWG published a roadmap that would provide insight into the situation analysis and key needs for the vertical segments and horizontal topics (Figure 1) [1].

[Figure 1: Horizontal topics across vertical segments]

In this roadmap, the enabling smart manufacturing technologies are referred to as horizontal topics that span across the electronics industry manufacturing segments: security, data flow architecture, and digital building blocks (AI, ML, and digital twin).

The three electronics manufacturing industry segments (SEMI, OSAT, and PCBA) share some common challenges:

  • Responding to rapidly changing, complex business requirements
  • Managing increasing factory complexity
  • Achieving financial growth targets while margins are declining
  • Meeting factory and equipment reliability, capability, productivity, and cost requirements
  • Leveraging factory integration technologies across industry segment boundaries
  • Meeting the flexibility, extendibility, and scalability needs of a leading-edge factory
  • Increasing global restrictions on environmental issues

These challenges are increasing the demand to deploy enabling smart manufacturing solutions that can be leveraged across the verticals.

Enabling Smart Manufacturing Technologies (Horizontal Topics): Situation Analysis

Many of the challenges may be addressed by several enabling smart manufacturing technologies (horizontal topics) that span across the electronics industry manufacturing segments: security, data flow, and digital building blocks. The key needs for these are discussed as related to the different vertical segments (SEMI, OSAT, and PCBA) and the intersection between the vertical segments.

Members of the smart manufacturing TWG presented the attribute needs for the following: security, data flow, digital building blocks, and digital twin. Common across the vertical segments is the ability to develop and deploy the appropriate solutions that allow the ability to manufacture products at low cost and high volume. Smart manufacturing is considered a journey that will require hyper-focus to ensure the appropriate technology foundation is established. The enabling horizontal topics are the ones that are considered the most important to build a strong, agile, and scalable foundation.

Security

Security is discussed in terms of two classes: physical and digital. The tools and protocols deployed for security are an increasingly important topic that spans many industries and is not specific to the electronics manufacturing industry. Security is meant to protect a number of important assets and system attributes that may vary according to the process (novel and strong competitive advantage) and the perceived intrinsic value of the intellectual property (IP).

In some instances, it directly addresses the safety of workers, equipment, and the manufacturing process. In other cases, it transitions toward the protection of electronic asset forms, such as design documents, bill of materials, process, business data, and others. A few key considerations for security are access control [2], data control [3], input validation, process confidentiality, and system integrity [2].

In manufacturing today, IT security issues are often raised only reactively, once the development process is over and specific security-related problems have already occurred. However, such belated implementation of security solutions is costly and often fails to deliver a reliable solution to the relevant problem. Consequently, it is deemed necessary to take a comprehensive approach that treats security as a process, including threat identification, risk analysis, and mitigation cycles for security challenges.

Data Flow

General factory operations and manufacturing technologies (i.e., process, test, and inspection) and the supporting hardware and software are evolving quickly; the ability to transmit and store increasing volumes of data for analytics (AI, ML, predictive) is accelerating. Also, the advent and subsequent growth of big data are occurring faster than originally anticipated. This trend will continue, highlighting existing challenges and introducing new gaps that were not previously considered (Figure 2).

As an example, data retention practices must quickly evolve; it has been determined that limitations on data transmission volume and length of data storage archives will disappear (e.g., historical data retention of “all” will become standard practice). Examples of data flow key considerations are data pipes, machine-to-machine (M2M) communication, and synchronous/asynchronous data transmission.

A flexible, secure, and redundant architecture for data flow, and the deployment options (e.g., cloud versus fog versus edge), must be articulated. The benefits and risks must be identified and discussed. Data flow, and its ability to accelerate the evolution of big data technologies, will enable the deployment of solutions that realize benefits from increases in data generation, storage, and usage. These capabilities, delivering higher data volumes at real-time and near-real-time rates, will increase the availability of equipment parameter data to positively impact yield and quality. There are several challenges and potential solutions associated with the increases in data generation, storage, and usage; capabilities for higher data rates; and additional equipment parameter data availability.

The primary topics to address are data quality and incorporating subject-matter expertise in analytics to realize effective on-line manufacturing solutions. The emergence of big data in electronics manufacturing operations should be discussed in terms of the “5 Vs Framework”:

  1. Volume
  2. Velocity
  3. Variety (or data merging)
  4. Veracity (or data quality)
  5. Value (or application of analytics)

The “5 Vs” are foundational to appreciate the widespread adoption of big data analytics in the electronics industry. It is critical to address the identified gaps—such as accuracy, completeness, context richness, availability, and archival length—to improve data quality to support the electronics manufacturing industry advanced analytics [4].

[Figure 2: Connectivity architecture for smart manufacturing functionality]

Digital Building Blocks

The advancements in the development of digital building blocks (interconnected digital technologies) are providing digitization, integration, and automation opportunities to realize smart manufacturing benefits. These technologies will enable electronics manufacturing companies to stay relevant as the era of the digitally-connected smart infrastructure is developed and deployed. Several technologies considered fundamental digital building blocks are receiving increased attention in the electronics manufacturing industry (e.g., AI, ML, augmented reality, virtual reality, and digital twin).

AI and ML

AI and ML tools and algorithms can provide improvements in production yields and quality. These tools and algorithms will enable the transformation of traditional processes and manufacturing platforms (processes, equipment, and tools). The situation analysis for AI and ML, as well as their enablers, typically considers the following features and operational specifications: communication at a fixed frequency, commonality analysis, material and shipment history and traceability, models for predicting yield and performance, predefined image processing algorithms, secure gateways, and warehouse management systems.

AI and ML present several opportunities to aggregate data for the purpose of generating actionable insights into standard processes. These include, but are not limited to, the following:

  1. Preventive maintenance: Collecting historical data on machine performance to develop a baseline set of characteristics on optimal machine performance, and to identify anomalies as they occur.
  2. Production forecasting: Leveraging trends over time on production output versus customer demand, to more accurately plan production cycles.
  3. Quality control: Inspection applications can leverage many variants of ML to fine-tune ideal inspection criteria. Leveraging deep learning, convolutional neural networks, and other methods can generate reliable inspection results, with little to no human intervention.
  4. Communication: It is important for members of the electronics manufacturing industry to adopt open communication protocols and standards [5–8].
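The preventive-maintenance idea in item 1 can be sketched with a simple statistical baseline: characterize optimal machine performance from historical readings, then flag readings that deviate from it. The 3-sigma threshold and the vibration data below are illustrative assumptions; real deployments would use richer models.

```python
import statistics

# Toy preventive-maintenance sketch: build a baseline (mean, stdev) from
# historical readings, then flag new readings that deviate by more than
# n standard deviations. Threshold and data are illustrative only.
def build_baseline(history):
    return statistics.mean(history), statistics.stdev(history)

def is_anomaly(reading, baseline, n_sigma=3.0):
    mean, stdev = baseline
    return abs(reading - mean) > n_sigma * stdev

history = [7.1, 7.3, 6.9, 7.0, 7.2, 7.1, 6.8, 7.0]  # e.g. spindle vibration
baseline = build_baseline(history)
```

A reading near the historical mean passes, while a large excursion is flagged as an anomaly worth investigating before it becomes a failure.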

Digital Twin Technology

The concept of real-time simulation is often referred to as the digital twin. Its full implementation is expected to become a requirement to remain cost-competitive in legacy and new facility types. The digital twin will initially be used to enable prediction capabilities for tools and process platforms that historically cause the largest and most impactful bottlenecks. The ultimate value of the digital twin will depend on its ability to continue to evolve by ingesting data and on the availability of data with the “5 Vs”: veracity, variety, volume, velocity, and value. The situation analysis of the digital twin within and between electronics industry manufacturing segments highlights the following data considerations: historical, periodic, and reactive.

The concept of a digital twin lends itself to on-demand access, monitoring, and end-to-end visualization of production and the product lifecycle. By simulating production floors, a factory will be able to assess attainable projected KPIs (and what changes are required to attain them), forecast production outputs and throughputs through a mix of cyber-physical realities (the physical world to the virtual world, and back to the physical world), and expedite the deployment of personnel and equipment to manufacturing floors worldwide.

Enabling Smart Manufacturing Technologies (Horizontal Topics): Key Attribute Needs

Security

Security will continue to be a primary concern as the electronics manufacturing industry adopts technologies and tools that rely on ingested data to improve manufacturing quality and yield and offer differentiated products at a lower cost and higher performance. SEMI members generated a survey to understand the needs, challenges, and potential solutions for security in the industry and its supply chain, and to gather more comprehensive input from the industry in terms of users, equipment and system suppliers, security experts, and security solution providers [9]. It is a topic that permeates many facets of manufacturing: equipment, tools, designs, process guidelines, materials, etc. Processes continue to demand a significant level of security to minimize loss of valuable know-how IP; this requirement will generate the greatest amount of discussion around topics such as data partitioning, production recipes, and equipment and tool layout. A few key attribute needs for security are network segmentation [10], physical access, and vulnerability mitigation.

These security issues are not unique to microelectronics manufacturing, and many of the issues go beyond manufacturing in general. The topic of security should reference the challenges and potential solutions across the manufacturing space. As an example, the IEC established an Advisory Committee on Information Security and Data Privacy [11]. It is suggested to collaborate with other standards and industry organizations that are developing general manufacturing security roadmaps by delineating specific microelectronics manufacturing issues and focusing on common needs.

Data Flow

The development of a scalable architecture that provides flexibility to expand; connect across the edge, the fog, and the cloud; and integrate a variety of devices and systems generating data flow streams is critical. A smart factory architecture may, for example, accommodate the different verticals in the electronics manufacturing industry as well as companies in non-electronics manufacturing industries.

As mentioned previously, different industries seeking to deploy smart manufacturing technologies should leverage architectures that provide the desired attributes; data flow architecture is considered a prime candidate for leveraging and cross-industry collaboration to identify optimum solutions (i.e., data synchronizers, execution clients).

The development and deployment of technologies for data flow are accelerating. Focus on data analytics and data retention protocols is increasing at a faster rate than first anticipated. It is imperative to collect the critical data as well as to establish guidelines to perform intelligent analysis and to apply the appropriate algorithms to support data-driven decisions. Several topics related to data are under consideration, such as general protocols:

  • “All” versus “anomaly” data retention practices
  • Optimization of data storage volumes
  • Data format guidelines for analytics to drive reactive and predictive technologies
  • Data quality protocols enabling improvements in time synchronization, compression/decompression, and blending/merging
  • Guidelines to optimize data collecting, transferring, storing, and analyzing
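
The "all" versus "anomaly" trade-off in the first bullet can be sketched as a simple retention filter. This is an illustrative sketch only; the function names and limit values are hypothetical, not taken from any protocol above.

```python
def retain(samples, low, high, mode="anomaly"):
    """Return the samples to store under the given retention policy.

    mode="all"     -> keep every sample
    mode="anomaly" -> keep only samples outside the [low, high] band
    """
    if mode == "all":
        return list(samples)
    return [s for s in samples if s < low or s > high]

# Example: a hypothetical sensor trace with control limits of 1.0-2.0
trace = [1.2, 1.5, 2.7, 0.4, 1.9]
print(retain(trace, 1.0, 2.0))              # anomaly mode keeps only 2.7 and 0.4
print(len(retain(trace, 1.0, 2.0, "all")))  # "all" mode keeps every sample
```

In practice the anomaly band would come from the data quality protocols above, and the choice between the two modes drives the storage-volume optimization in the second bullet.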

Data considerations for equipment are:

  • Defining context data sets for equipment visibility
  • Improving data accessibility to support functions
  • Data-enabled transition from reactive to predictive functionality
  • Data visibility of equipment information (state, health, etc.)

Digital Building Blocks

The ability to deploy the necessary digital building blocks to realize smart manufacturing is at different stages of maturity.

AI and ML

A few key attribute needs for AI and ML are data communication standards, data formatting standards, and 3PL tracking solutions. Technologies, such as AI and ML, are seen as enablers to transition to a predictive mode of operation: predictive maintenance, equipment health monitoring, fault prediction, predictive scheduling, and yield prediction and feedback. This paradigm in AI-enhanced control systems architectures will enable the systems to “learn” from their environment by ingesting and analyzing large data sets. Advanced learning techniques will be developed that improve adaptive model-based control systems and predictive control systems. The continued development and assessment of AI and ML technologies is critical to establish the most robust and well-tuned prediction engines that are required to support emerging production equipment.

Digital Twin Technology

Advances in digital twin technologies are accelerating as the potential benefits are communicated to end-users. Also, the costs of enabling technologies (hardware and software platforms) are decreasing. The following are considered key attribute needs that will increase adoption and broad-based deployment of the digital twin (product design, product manufacturing, and product performance): digital thread, predictive, prescriptive, and systemwide continuous data access.

Digital twin is a long-term vision that will depend on the implementation of discrete prediction capabilities (devices, tools, and algorithms) that are subsequently integrated on a common prediction platform. It is generally considered that the digital twin will provide a real-time simulation of facility operations as an extension of the facility operations system.

The successful deployment of digital twin in a facility environment will require high-quality data (e.g., accuracy, velocity, dynamic updating) to ensure the digital twin is an accurate representation of the real-time state of the fab. Also, the realization of this vision will depend on the ability to design an architecture that provides the key technologies to operate collaboratively by sharing data and capabilities. Ultimately, the success of the digital twin will depend on the ability to develop a path for implementation that provides redundancy and several risk assessment gates.

Prioritized Research, Development, and Implementation Needs

The topic of collaboration is often mentioned in industry-led initiatives as a key element to realize the benefits attributed to smart manufacturing. There is a strong drive by members of the electronics manufacturing industry to engage in activities that foster collaboration. Participants in these activities recognize that solutions must be consensus-based and adopted by many vendors. Equipment suppliers appreciate that deep domain knowledge combined with data analysis contributes to only a fraction of the potential value that can be captured. The optimal value will be realized when data is shared across manufacturing lines in facilities, with vertical segment industry supply chain members and across vertical segments.

Example prioritized research, development, and implementation needs topics are as follows:

  • Define data flow standard interfaces and data formats for all equipment and tools
  • Investigate if data flow continuity between vertical segments should be mandatory or optional
  • Determine optimal operation window for the latency of data versus process flow and quantify permissible latency for data flow when used to determine process go/no-go
  • Investigate data security and encryption requirements when sharing common process tools versus isolating process equipment between vertical segments
  • Develop open and common cross-vertical-segments communication standards and protocols for equipment

Gaps and Showstoppers

There is universal agreement that digitization will drive huge growth in data volumes. Many predict that cloud and hybrid cloud solutions are critical to enable the storage and subsequent manipulation of data by AI algorithms to derive value. However, industry members must adopt consensus-based standards and guidelines for connectivity protocols and data structures (Figure 4). Smart manufacturing is a journey, and a robust and scalable connectivity architecture must be established on which to deploy digital building blocks (e.g., AI, ML to extract the optimal value from the data). 

[Figure 4: Cross-segment standard equipment connectivity for smart manufacturing]

Example critical gaps that could significantly impact the progress of the deployment and adoption of smart manufacturing are:

  • Undefined data security between vertical segments
  • Lack of machine interface standardization for data flow
  • Undefined data formats for data flow
  • Data vulnerability when security is breached
  • Lack of a robust and scalable connectivity architecture across electronics vertical segments to enable smart manufacturing functionality (event and alarm notification, data variable collection, recipe management, remote control, adjustment of settings, interfacing with operators, etc.)

Summary

The iNEMI Smart Manufacturing Roadmap Chapter provides the situation analysis and key attribute needs for the horizontal topics within the vertical segments as well as between the vertical segments. Also, the chapter identifies the primary gaps and needs for the horizontal topics that must be addressed to enable the realization of smart manufacturing:

  • Definitions: Smart manufacturing, smart factory, Industry 4.0, AI, ML, etc.
  • Audits for smart manufacturing readiness: Develop consensus-based documentation, leverage published documents (e.g., Singapore Readiness Index [12])
  • Security: Best practices, physical, digital, local and remote access, etc.
  • Equipment diversity and data flow communications: Old, new, and mixture
  • Data attribute categorization and prioritization: Volume, velocity, variety, veracity, and value
  • Cost versus risk profile versus ROI
  • Talent pool (subject-matter experts): Data and computer scientists, manufacturing engineers, and automation
  • Standards and guidelines: Data formats and structures, communication protocols, and data retention
  • Open collaboration: SEMATECH 2.0

Addressing the identified gaps and needs requires additional detail on the status of the different vertical segments so that initiatives can be structured appropriately. Circulating surveys was suggested as a way to gather this information. One survey format was suggested as an example template: Manufacturing Data Security Survey for IRDS FI Roadmap [13].

iNEMI, together with other organizations, such as SEMI, can organize workshops to facilitate collaboration between the electronics manufacturing industry stakeholders. In addition, iNEMI can establish cross-industry collaborative projects that can develop the enabling technologies to address the roadmap identified needs and gaps to realize smart manufacturing.

Further, organizations, such as iNEMI and SEMI, can collaborate to establish guidelines and standards (e.g., data flow interfaces and data formats) as well as lead groups to develop standards for equipment and tool hardware to reduce complexity during manufacturing. Also, iNEMI can engage other industry groups to foster the exchange of best practices and key knowledge from smart manufacturing initiatives.

The members of the roadmap TWG are committed to provide guidance during the smart manufacturing journey—people, processes, and technologies. Members of the TWG also suggested engaging microelectronics groups as well as non-microelectronics groups to assess opportunities to leverage existing smart manufacturing guidelines and standards.

Acknowledgments

Thank you to the members of the iNEMI Smart Manufacturing TWG. Their dedication, thought leadership, and deep appreciation for SMT enabling technologies were critical to preparing the roadmap chapter.

In addition, we would like to thank the participants and facilitators of the SEMI Smart Manufacturing Workshop—Practical Implementations and Applications of Smart Manufacturing (Milpitas, California, November 27, 2018).

References

1. 2019 iNEMI Roadmap.
2. U.S. National Institute of Standards and Technology Special Publication 800-82.
3. U.S. National Institute of Standards and Technology Special Publication 800-171.
4. IEEE International Roadmap for Devices and Systems, Factory Integration.
5. Japan Robot Association’s Standard No. 1014.
6. SEMI E30-0418, Generic Model for Communications and Control of Manufacturing Equipment (GEM); SEMI A1-0918 Horizontal Communication Between Equipment; SEMI E5-1217, Communications Standard 2 Message Content (SECS-II); SEMI E4-0418, Equipment Communications Standard 1 Message Transfer (SECS-I).
7. Hermes Standard.
8. IPC-CFX Standard.
9. J. Moyne, S. Mashiro, and D. Gross, “Determining a Security Roadmap for the Microelectronics Industry,” 29th Annual SEMI Advanced Semiconductor Manufacturing Conference (ASMC), pp. 291–294, 2018.
10. IEC 62443-3-2.
11. website: iec.ch/acsec.
12. EDB Singapore, “The Singapore Smart Industry Readiness Index,” October 22, 2019.
13. www.surveymonkey.com/r/ZXLS6LH.

Ranjan Chatterjee is vice president and general manager, smart factory business, at Cimetrix.

Dan Gamota is vice president, manufacturing technology and innovation, at Jabil.


Article First Posted by SMT007 Magazine

Feature by Ranjan Chatterjee, CIMETRIX
and Daniel Gamota, JABIL

Editor’s note: Originally titled “The Convergence of Technologies and Standards Across the Electronic Products Manufacturing Industry (SEMI, OSAT, and PCBA) to Realize Smart Manufacturing,” this article was published as a paper in the Proceedings of the SMTA Pan Pacific Microelectronics Symposium and is pending publication in the IEEE Xplore Digital Library.


Topics: Industry Highlights, SECS/GEM, Customer Support, Doing Business with Cimetrix, Cimetrix Products

Leveraging Cimetrix EquipmentTest to Develop a Reliable SMT-ELS Interface

Posted by Jesse Lopez: Software Engineer on Oct 31, 2019 12:45:00 PM

Recently, I had the opportunity to participate in the development, testing, and integration of the Cimetrix ELS library that encompasses the SEMI A1, A1.1, and A2 (SMT-ELS) standards. It’s been exciting to see how ELS has increasingly been embraced as a connectivity solution for electronic manufacturing equipment.

I was first introduced to the SMT-ELS standard in June 2019 by Alan Weber (VP, New Product Innovations, Cimetrix). To begin, I obtained a functioning ELS implementation from Siemens Japan as well as the needed hardware. To make sure I fully understood ELS, I attended a 2-day class presented by Siemens and began studying the ELS standard and the Siemens ELS implementation.

It took a significant amount of time to get familiar with the Siemens implementation and gain an understanding of what they did to support the ELS standard. Siemens Japan has done a great job with their SEMI SMT-ELS implementation, and their assistance with my efforts is greatly appreciated. Once I felt familiar enough with ELS, I built a SMEMA interface driver to simulate the conveyor signals.

Using the SMT-ELS communications library, the Cimetrix development team designed a sample equipment application which I was able to use for initial connectivity testing. At first, it was fairly difficult to get the two libraries to communicate. However, when I used the Cimetrix EquipmentTestTM software, I was able to find defects in our library, which were quickly and easily resolved by our development team. 

While it was beneficial to have a known ELS implementation to test against, it is now clear how valuable using a testing tool would be for anyone creating or validating their own SEMI SMT-ELS implementation.

Even though the SEMI A1, A1.1, and A2 standards are not long, they are dense. As adoption of these standards increases, it becomes paramount that equipment manufacturers can test their SMT-ELS implementations during development. It is neither effective nor efficient for equipment manufacturers to test against other equipment as their primary form of testing. This is why the Cimetrix EquipmentTest SMT-ELS plug-in is so valuable.

The tests I am currently working on are written in C#, and the code is easy to follow. The tests are split into two categories: one for horizontal communication between equipment, and one for vertical communication to a factory system.

Horizontal Tests

For Panel Transfer verification, EquipmentTest connects to the first and last equipment in the line. This allows EquipmentTest to send messages to the first equipment and validate the format and content of the message from the last equipment.

For this test, the user defines the panel parameters. The panel is sent to the first equipment. Once the last equipment in the line sends the panel to EquipmentTest, the Material Data Content is verified. 

In addition to actual tests, EquipmentTest can be used to send user-defined atomic messages such as SetMDMode.

Vertical Tests

EquipmentTest connects directly to the vertical port of the equipment. Using EquipmentTest, I can set and validate the Net Configuration.

The EquipmentTest software has been pivotal in developing and testing our SMT-ELS implementation. A demonstration of EquipmentTest SMT-ELS and the Cimetrix EquipmentConnectTM SMT-ELS software will be given at Productronica, November 12-15, 2019, in Munich, Germany. Please drop by our booth any time, or feel free to set up an appointment in advance. We look forward to meeting with you and discussing your ELS needs!

Meet with Us

 

Topics: Industry Highlights, Doing Business with Cimetrix, Smart Manufacturing/Industry 4.0, Cimetrix Products, SMT/PCB/PCBA

EDA Best Practices Series: Choose to Provide E164-Compliant Models

Posted by Derek Lindsey: Product Manager on Aug 28, 2019 11:42:00 AM

In the EDA Best Practices blog series, we have discussed choosing a commercial software platform, using that package to differentiate your data collection capabilities, and how to choose what types of data to publish. In this post, we will review why you should choose to provide an E164-compliant equipment model.

What is E164?

Equipment Data Acquisition (EDA) - also referred to as Interface A - offers semiconductor manufacturers the ability to collect a significant amount of data that is crucial to the manufacturing process. This data is represented on the equipment as a model, which is communicated to EDA clients as metadata sets. The metadata, based upon the SEMI E125 Specification for Equipment Self-Description, includes the equipment components, events, and exceptions, along with all the available data parameters.

Since the advent of the SEMI EDA standards, developers and fabs have recognized that equipment models, and the resulting metadata sets, can vary greatly. It is possible to create vastly different models for similar pieces of equipment and have both models be compliant with the EDA standards. This makes it difficult for the factories to know where to find the data they are interested in from one type of equipment to another.

Recognizing this issue, the early adopters of the EDA standards launched an initiative to make the transition to EDA easier and ensure consistency of equipment models and metadata from equipment to equipment. This effort resulted in the E164 EDA Common Metadata standard, approved in July 2012. Another part of this initiative was the development of the Metadata Conformance Analyzer (MCA), a utility that tests conformance to this standard. With this specification, equipment modeling is more clearly defined, resulting in more consistent models across equipment suppliers. This makes it easier for EDA/Interface A users to navigate models and find the data they need.

Power of E164

The E164 standard requires strict name enforcement for events called out in the GEM300 SEMI standards. It also requires that all state machines contain all of the transitions, in the same order, as those called out in the GEM300 standards. This includes state machines in E90 for substrate locations and in E157 for process management. The state and transition names in these state machines must match the names specified in the GEM300 standards.
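As a rough illustration of how strict this is, a conformance spot-check must compare both the names and the ordering of a tool's reported transitions against the required list. The transition names below are illustrative placeholders, not quoted from E90 or E157; the MCA utility performs the real checks.

```python
# Hypothetical sketch of an E164 conformance spot-check: compare a tool's
# reported state-machine transitions against the names and ordering the
# common-metadata standard requires. The names here are placeholders.
REQUIRED_TRANSITIONS = ["ProcessStarted", "StepStarted",
                        "StepCompleted", "ProcessCompleted"]

def transitions_conform(reported):
    """True only if both the names and their order match the required list."""
    return list(reported) == REQUIRED_TRANSITIONS

print(transitions_conform(
    ["ProcessStarted", "StepStarted", "StepCompleted", "ProcessCompleted"]))  # True
print(transitions_conform(
    ["StepStarted", "ProcessStarted", "StepCompleted", "ProcessCompleted"]))  # False: wrong order
```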

These requirements may seem unnecessarily strict, but implementing the common metadata standard results in:

  • Consistent implementations of GEM300
  • Commonality across equipment types
  • Automation of many data collection processes
  • Less work to interpret collected data
  • Ability for true “plug and play” applications
  • Major increases in application software engineering efficiency

Knowing that a model is E164 compliant allows EDA client applications to easily and programmatically define data collection plans knowing that the compliant models must provide all of the specified data with the specified names. For example, the following application is able to track carrier arrival and slotmap information as well as movement of material through a piece of equipment and process data for that equipment.

This application will work for any GEM300 equipment that is E164 compliant. The client application developer can confidently create data collection plans for these state machines, knowing that an E164-compliant model must provide the needed state machines and data with the prescribed names.
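The idea of programmatically defining data collection plans can be sketched as follows. This is a hypothetical illustration: the event paths, dictionary, and function names are invented for the example and are not the actual EDA interfaces or E164 names.

```python
# Hypothetical sketch only: because E164 fixes the metadata names, a client
# can build data collection plans from a static table instead of
# hand-inspecting each tool's model. All names below are illustrative.
E164_EVENTS = {
    "carrier_arrival": "E87/CarrierArrived",       # illustrative path
    "substrate_moved": "E90/SubstrateTransition",  # illustrative path
    "process_started": "E157/ProcessStarted",      # illustrative path
}

def build_dcp(plan_name, event_keys):
    """Assemble a minimal data-collection-plan description from the
    table of names a compliant model is expected to provide."""
    unknown = [k for k in event_keys if k not in E164_EVENTS]
    if unknown:
        raise KeyError(f"not a known E164 common event: {unknown}")
    return {"name": plan_name,
            "events": [E164_EVENTS[k] for k in event_keys]}

dcp = build_dcp("material-tracking", ["carrier_arrival", "substrate_moved"])
print(dcp["events"])  # the same plan works on any compliant tool
```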

Decide to be E164 compliant

A number of leading semiconductor manufacturers around the globe have seen the power of requiring their equipment suppliers to provide EDA/E164 on their equipment, and now require it in their purchase specifications.

If you are a semiconductor manufacturer, you should seriously consider doing the same because it will greatly simplify data collection from the equipment (and most of your candidate suppliers probably have an implementation available or underway).

If you are an equipment supplier and your factory customers have not required that your EDA models be E164 compliant, you should still seriously consider providing this capability as a way to differentiate your equipment. Moreover, E164-compliant models are fully compliant with all other EDA standards. Finally, it is much easier and more cost-effective to create E164-compliant models from the outset than it is to create non-compliant models and then convert to E164 when the factory requires it.

Conclusion

The purpose of the E164 specification is to encourage companies developing EDA/Interface A connections to implement a more common representation of equipment metadata. By following the E164 standard, equipment suppliers and factories can establish greater consistency from equipment to equipment and from factory to factory. That consistency will make it easier and faster for equipment suppliers to provide a consistent EDA interface, and for factories to develop EDA client applications.

Contact Us

Topics: Industry Highlights, EDA/Interface A, Doing Business with Cimetrix, Smart Manufacturing/Industry 4.0, Cimetrix Products, EDA Best Practices

EDA Best Practices Series: Specifying and Measuring Performance and Data Quality

Posted by Alan Weber: Vice President, New Product Innovations on Aug 1, 2019 12:14:00 PM

The old adage “You get what you pay for” doesn’t fully apply to equipment automation interfaces… more accurately, you get what you require, and then what you pay for!

This is especially true when considering the range of capability that may be provided with an equipment supplier’s implementation of the EDA (Equipment Data Acquisition, also known as Interface A) standards. Not only is it possible to be fully compliant with the standards while delivering an equipment metadata model that contains very little useful information, but the standards themselves are also silent on the topics of Performance and Data Quality. So you must take extra care to state these requirements and expectations in your purchase specifications if you expect the resulting interface to support the demands of your factory’s data analysis and control applications. Moreover, to the extent these requirements can be tested, you should describe the test methods and tools that you will use in the acceptance process to minimize the chance of ugly surprises when the equipment is delivered.

We have covered the importance of and process for creating robust purchase specifications in a previous posting. This post will focus specifically on aspects of Performance and Data Quality within that context.

Scope of Performance and Data Quality Requirements

From a scope standpoint, Performance and Data Quality requirements are found in a number of sections in an automation specification. The list below is just a starting point suitable for any advanced wafer fab – your needs may extend and exceed these significantly.

Here are some sample requirements that pertain to the computing platform for the EDA interface software:

  • The interface computer should have the capability of a 4-core Intel i5 or i7 or better, with processing speed of 2+ GHz, 8 GB of RAM, and 500 GB of persistent storage with at least 50% available at all times.
  • The equipment must monitor key performance parameters of the EDA computing platform such as CPU utilization (%), memory utilization (GB, %), disk utilization (GB, %) and access rate, etc. using system utilities such as Perfmon (for Windows systems) and store this history either in a log file or in some part of the equipment metadata model.
  • The network interface card must support 1 Gbit per second (or faster) communications.
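The platform-monitoring requirement above can be approximated in a few lines. This sketch checks only the "at least 50% available" storage clause using the Python standard library and formats a plain-text log line, leaving CPU and memory sampling to platform tools such as Perfmon; the function names are illustrative.

```python
import shutil

def disk_headroom_ok(path=".", min_free_fraction=0.50):
    """Check the 'at least 50% available' persistent-storage clause."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total >= min_free_fraction

def log_line(name, value, unit):
    """Format one monitoring sample for a plain-text history log."""
    return f"{name}={value:.1f}{unit}"

print(log_line("disk_free_pct", 62.5, "%"))
print(disk_headroom_ok("."))  # result depends on the machine it runs on
```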

In the area of equipment model content, the following requirements are directly related to interface performance and data quality:

  • The equipment should make the EDA computing platform performance parameters available as parameters of an E120 logical element that represents the EDA interface software itself.
  • The supplier must provide a written description of the update rates, recommended sampling intervals, normal operating ranges and behaviors, and high/low/rate-of-change limits for all key process parameters. These will be used to design data quality filters in the data path between the equipment and the consuming applications/users.
  • Equipment parameters provided through the EDA interface must exhibit a number of data quality characteristics, including, but not limited to: an internal sampling/update rate sufficient to represent the underlying signal accurately; timing of trace reports that is consistent with the sampling interval within +/- 1.0%; values in adjacent trace reports must contain then-current values at the specified sampling interval; and rejection of obvious outliers.
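The trace-report timing clause above (timestamps consistent with the sampling interval within +/- 1.0%) can be verified mechanically. A minimal sketch, with hypothetical function names:

```python
def trace_timing_ok(timestamps, interval, tolerance=0.01):
    """Verify that successive trace-report timestamps stay within
    +/- tolerance (1% by default) of the specified sampling interval."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return all(abs(d - interval) <= tolerance * interval for d in deltas)

# 0.1 s sampling: the second trace has one late report (0.12 s gap, 20% off)
good = [0.0, 0.1, 0.2, 0.3]
bad = [0.0, 0.1, 0.22, 0.3]
print(trace_timing_ok(good, 0.1))  # True
print(trace_timing_ok(bad, 0.1))   # False
```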

Advanced users of the EDA standards are now raising their expectations for the equipment to provide self-monitoring and diagnosis capability in the form of built-in data collection plans (DCPs), as expressed in some of the following requirements:

  • The supplier must provide built-in DCPs to support common equipment performance monitoring, diagnostic, and maintenance processes that are well known to the supplier. Documentation for these DCPs must define their purpose, activation conditions, interface bandwidth consumed, and the types of analysis the collected data enables.
  • The supplier must describe the operating conditions that can lead to a PerformanceWarning situation for the EDA interface.
  • The supplier must describe the algorithms used to deactivate DCPs under PerformanceWarning conditions. These might include LIFO (i.e., the last DCP activated is the first to be deactivated), decreasing order of bandwidth consumed or “size” (in terms of total # of parameters and # of trace/event requests), etc.

Because of the power and complexity of the DCP structure defined in the EDA standards, it is not sufficient to specify the raw communications performance requirement as a small number of isolated criteria, such as total bandwidth (in parameters per second) or minimum sampling interval. Rather, since the EDA interface must support a variety of data collection client demands for a wide range of production equipment, these requirements should be expressed as combinations of sampling interval, # parameters per DCP, # of simultaneously active DCPs, group size, buffering interval, response time for ad hoc “one-shot” DCPs, maximum latency of event generation after the related equipment condition occurred, consistency of timestamps in trace reports with the specified sampling interval, and perhaps others.

Moreover, some equipment types may have more stringent performance requirements than others, depending on the criticality of timely data for the consuming applications… so there may be process-specific performance requirements as well.

Measurement and Testing

Methods for measuring and testing the above requirements should also be described in the purchase specifications so the equipment suppliers can know they are being successfully addressed during the development process and can demonstrate compliance before and after shipping the equipment. Clarity at this phase saves time and expense later on.

Examples of such requirements include:

  • The supplier must test the EDA interface across the full range of performance criteria specified above and provide reports documenting the results.
  • An earlier requirement states that the EDA interface must be capable of reporting at least 2000 parameters at a sampling interval of 0.1 seconds (10Hz) with a group size of 1, for a total data collection capacity (bandwidth) of 20,000 parameters per second. In addition to this overall bandwidth capability, the supplier must demonstrate that this performance is possible over a range of specific data collection deployment strategies, meaning different #s and sizes of DCPs, different sampling intervals, group sizes, etc. without causing the EDA interface to reach one of its “Performance Warning” states or overstress its computing platform. To this end, all combinations of the following data collection configuration settings must be run for at least 15 seconds each; assuming the equipment has n processing modules:
    • Trace intervals (in seconds): 1, 0.5, 0.2, 0.1 (and 0.05 if possible)
    • # of parameters per DCP: 10, 50, 100, 250, 500, 1000 (and 2000 if possible)
    • # of DCPs: 1, 2, 3, … to n
    • Group size: 10, 5, 2, 1
  • The test client should be run on a separate computing platform with sufficient computing power to “stay ahead” of the EDA interface computer; in other words, the EDA interface should never have to wait on the client system.
  • Test reports should indicate the start and stop time of each iteration (i.e., one combination of the above settings), and verify that the timestamps of the data collection reports sent by the EDA interface are within +/- 1% of the value expected if the samples were collected exactly at the specified trace interval.
  • Performance parameters of the EDA interface platform should also be monitored during the tests and included in the report. These parameters should include memory usage, CPU processing load, and disk access rate (and perhaps others) for all processes that constitute the EDA interface software.

This approach is shown in tabular form for a 2-chamber tool (see below); since Group Size does not (or should not) impact the effective parameters per second rate, it is not shown in the table.
  • A summary report for all performance tests that show acceptable message generation and transmission timing across the full range of data collection test criteria must be available.
  • Detailed SOAP logs for specific performance tests must be available on request.
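The test combinations specified earlier are straightforward to enumerate; this sketch assumes a hypothetical 2-chamber tool (n = 2). Note that the largest cell reproduces the 20,000 parameters-per-second bandwidth figure from the requirement above.

```python
from itertools import product

# Hypothetical 2-chamber tool (n = 2); values from the specification above
TRACE_INTERVALS = [1, 0.5, 0.2, 0.1]            # seconds
PARAMS_PER_DCP = [10, 50, 100, 250, 500, 1000]
NUM_DCPS = [1, 2]                               # 1 .. n for n = 2
GROUP_SIZES = [10, 5, 2, 1]

matrix = [
    {"interval": iv, "params": p, "dcps": d, "group": g,
     # effective reporting load in parameters per second
     "params_per_sec": p * d / iv}
    for iv, p, d, g in product(TRACE_INTERVALS, PARAMS_PER_DCP,
                               NUM_DCPS, GROUP_SIZES)
]
print(len(matrix))  # 4 * 6 * 2 * 4 = 192 combinations to run for 15 s each
print(max(c["params_per_sec"] for c in matrix))  # 20000.0 parameters/second
```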

In Conclusion


We hope you now have some appreciation for the importance of solid requirements in this area, and can accurately assess how well your current purchase specifications express your actual needs. If you want to know more about a well-defined process for improving your specifications, or have any other questions regarding the status and outlook of the EDA standards, and how they can be implemented, please contact us.

Contact Us

Topics: Industry Highlights, EDA/Interface A, Doing Business with Cimetrix, Smart Manufacturing/Industry 4.0, Cimetrix Products, EDA Best Practices

Cimetrix had a great showing at SEMICON Southeast Asia!

Cimetrix just finished exhibiting at SEMICON Southeast Asia for the first time. And a grand entrance it was. Located in Kuala Lumpur, Malaysia, this is one of the regional SEMICON shows put on by SEMI, a global industry association serving the manufacturing supply chain for the electronics industry. Southeast Asia is a hotbed for the semiconductor backend and PCBA (SMT) industries. With our new employee Raymund Yeoh located in Penang, Malaysia, combined with our distribution partner Electrotek based in Singapore, Cimetrix now has a strong presence to support Industry 4.0 adoption in Southeast Asia.

By working closely with SEMI, Cimetrix had a new booth in the SEMI Smart Manufacturing Pavilion and an impressive demonstration in the SEMI Smart Manufacturing Journey.

Our new booth emphasized (1) our global reach as the world’s largest supplier of equipment connectivity and control software, (2) our new SapienceTM factory side platform which has beta installations at select major EMS and electronics manufacturing sites, and (3) our new EquipmentTestTM connectivity tester designed to make equipment connectivity easier than ever before.

Our booth was extremely busy the whole time with demonstrations of Sapience and EquipmentTest. We gave out vouchers for free copies of EquipmentTest to booth visitors, which generated excitement and will accelerate learning about GEM connectivity in Southeast Asia. It was interesting to see how many factory engineers and managers visited us seeking help with getting their equipment connected for traceability and OEE (Overall Equipment Effectiveness). And we had the answers. Right next door to our booth was the SEMI Smart Manufacturing Journey, which offered guided tours demonstrating the use of Industry 4.0 throughout the electronics manufacturing supply chain. Our job was to demonstrate standards-based data collection from live equipment in real time, displaying OEE charts and data for each tour to witness. Setting this up in a factory can take months. Our Smart Factory Business Team is out to turn this problem upside down: they connected to all four pieces of live equipment in one day and were ready to go at the start of the show. And we are ready to do that in factories too.

Here are Mike and Jesse giving a demonstration to a tour group. The equipment is located right behind the crowd for all to see, with Sapience displaying data and the crowd taking pictures. SEMI did a great job organizing this. We had top government officials, factories, equipment manufacturers, electronics distributors, and universities come through the tours. We also exceeded expectations by adding artificial intelligence to the demonstration: Amazon Alexa was integrated into Sapience, which allowed us to ask Alexa which factory was most productive last week. Alexa and Sapience analyzed the data and gave the answer to the tour crowd.

We have many new opportunities to follow up on, and we will be working with SEMI on how to help companies in Southeast Asia learn and adopt Industry 4.0.

Following the show, our team spread out to visit the rapidly growing Cimetrix customer base in Penang, Korea, India and China with support from our local teams. See you next year in SEMICON Southeast Asia!

Buy EquipmentTest Today


Topics: Doing Business with Cimetrix, Events, Global Services, Smart Manufacturing/Industry 4.0, Cimetrix Products

Do you need help with GEM Testing?

Posted by David Francis: Director of Product Management on May 22, 2019 11:21:00 AM

A few years ago, I went through the process of building a new house. It was exciting to work with the architect to design the house and imagine what the finished product was going to be like. The architect created a 40-page set of drawings detailing all the components that would go into the house, like the electrical, plumbing and flooring. I thought everything was covered. I was a little surprised when things didn’t go exactly as detailed in the drawings. There were exceptions! However, having the detailed drawings made it easier to identify where things went wrong and helped clarify what needed to be done to correct the problems.

Communication standards like GEM are like a set of architectural drawings for how to connect equipment to factory control systems. They define what needs to be communicated and how the communication needs to take place, and they provide a great roadmap for getting there. But as with building a new house, there are usually a few surprises along the way. A standard, consistent way of testing the interface that can be used by both the factory and the equipment manufacturer greatly reduces the unknowns and simplifies the process.

The new Cimetrix EquipmentTest™ product is the fastest way to achieve GEM compliance for factory acceptance testing of new equipment. Whether you are an equipment manufacturer or a factory, making sure the equipment interface is GEM compliant is critical, and having an easy-to-use testing solution to determine whether it is compliant is just as important.

There are two versions of EquipmentTest, depending on your needs. The EquipmentTest Basic version is ideal for both smart factories and equipment manufacturers to quickly and easily test the basic capabilities of an equipment’s GEM interface. EquipmentTest Basic includes a simple testing scenario, called a plugin, to evaluate the equipment’s ability to connect to a GEM host and communicate events, data, and alarms. This version also includes the ability to send and receive individual messages to and from the equipment for discovery or diagnostic purposes. With the messaging functionality, you can also create macros to send and receive groups of messages.

For more complex testing, there is the EquipmentTest Pro version. In addition to all the features of EquipmentTest Basic, the Pro version includes a full, rigorous GEM compliance testing plugin and an operational GEM compliance testing plugin. It also includes development tools that allow you to create your own custom tests and plugins using .NET languages. The GEM compliance plugin generates a GEM compliance statement that shows the areas and level of compliance with the GEM standards. Other tools available only in the Pro version let you easily test and interact with the GEM functionality on the equipment.

As with all our products, Cimetrix supports the industry connectivity standards so you never have to wonder if your equipment is keeping up with the rest of the industry.

You can purchase either version of EquipmentTest directly from our website and download the software immediately. You will need to provide a valid MAC ID and email address for licensing purposes. You will receive your license agreement no more than 48 hours after purchase. Be sure to learn more and get your EquipmentTest download today!

Buy EquipmentTest Today

Topics: Industry Highlights, SECS/GEM, Smart Manufacturing/Industry 4.0, Cimetrix Products

Multiple GEM Connections on Manufacturing Equipment

Posted by Brian Rubow: Director of Solutions Engineering on Apr 10, 2019 12:47:00 PM

The GEM standard is often incorrectly perceived as a single-connection protocol for manufacturing equipment. A single connection means that only one software product can use the GEM interface at a time, and much of the manufacturing equipment that supports the GEM standard allows only one connection. However, this limitation exists only out of unfamiliarity with the standard, by tradition, or to satisfy the most common manufacturing system architecture.

The truth is that the GEM standard simply does not discuss additional connections, meaning that additional connections are neither required nor prohibited. Not only is it possible for a piece of equipment to support multiple concurrent GEM interfaces, but it is becoming more and more common. As long as each GEM connection is point to point and complies with the GEM standard, this is certainly allowed. However, each connection should be completely independent of the other GEM connections and still comply with all GEM requirements. Implementing multiple connections raises several questions.

What does it mean for each GEM connection to be independent?

It means that each GEM host operates completely independently, as if the other GEM host connections were not present. Here is a more specific list of attributes that define “completely independent”:

  • The Communication state model is independent. Each can establish and disconnect independently from the other host packages.
  • The Control state model is independent. Each can be set up as local or remote as needed. 
  • Collection event report dynamic configuration is completely independent. Each host defines a unique set of reports and subscribes to a unique set of collection events. Even so, if two GEM host connections create identical reports and link them to the same collection event, then both should receive identical data. 
  • Each host subscribes to a unique set of alarms. 
  • Each host can query status information independently of the others.
  • Each host can choose to enable or disable Spooling and configure it as desired.
  • Each host can set up its own trace data collection.
  • Each host only receives messages based on its subscriptions.
  • Each host only sees reply messages to its primary messages.
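The independence described above can be pictured as the equipment keeping a separate copy of all connection-specific GEM state for each host. The following is a minimal sketch of that idea, not the CIMConnect API; all class and method names here are illustrative:

```python
class GemConnectionState:
    """Independent state kept for one GEM host connection."""
    def __init__(self):
        self.control_state = "OFFLINE"  # ControlState is per-connection
        self.reports = {}               # report id -> list of variable ids
        self.event_links = {}           # collection event id -> report ids
        self.enabled_events = set()     # EventsEnabled is per-connection
        self.enabled_alarms = set()     # alarm subscriptions per-connection

class Equipment:
    """Equipment hosting several fully independent GEM connections."""
    def __init__(self, n_connections):
        self.connections = {i: GemConnectionState()
                            for i in range(1, n_connections + 1)}

    def enable_event(self, conn_id, event_id):
        # A subscription affects only the connection that made it.
        self.connections[conn_id].enabled_events.add(event_id)

    def hosts_to_notify(self, event_id):
        # When the event occurs, report it only to hosts that enabled it.
        return [cid for cid, conn in self.connections.items()
                if event_id in conn.enabled_events]

eq = Equipment(5)
eq.enable_event(1, "ProcessComplete")  # host 1 subscribes
eq.enable_event(3, "ProcessComplete")  # host 3 subscribes; host 2 is untouched
```

The key design point is that no state is shared between connections, so one host enabling an event, changing its control state, or configuring spooling can never be observed by the others.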

Are you talking about HSMS-GS? 

No. HSMS-GS means implementing SEMI Standard E37.2, High Speed Message Service – General Session, an inactive SEMI standard. This standard, which never gained much industry traction, opens a single port through which any number of clients can connect. In contrast, I am talking about supporting multiple implementations of E37.1, High Speed Message Service – Single Session (HSMS-SS) where each connection uses a unique port number. Nearly all GEM interfaces today use the HSMS-SS protocol. 
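To make the one-port-per-connection idea concrete, here is a sketch using plain Python sockets rather than an HSMS library. It simply shows the equipment (the passive side) opening one listening TCP endpoint per GEM connection, each on its own port; the port numbers are illustrative:

```python
import socket

def open_passive_endpoints(ports):
    """Open one passive (listening) TCP endpoint per HSMS-SS connection.

    With HSMS-SS, each GEM connection gets its own port, and each host
    connects to its own endpoint, independently of the others.
    """
    listeners = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("0.0.0.0", port))
        s.listen(1)  # single session per endpoint, as in HSMS-SS
        listeners.append(s)
    return listeners

# e.g. five independent GEM connections on five hypothetical ports:
#   open_passive_endpoints([5000, 5001, 5002, 5003, 5004])
```

A real implementation would of course layer the HSMS state machine (Select, Link Test, message exchange) on top of each accepted connection; the point here is only that each session has its own dedicated port, in contrast to HSMS-GS, where many clients share one port.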

What are the advantages of having multiple GEM connections in a single GEM interface? 

This opens the door for many useful applications. Here are three example configurations, and of course, all of them could be accomplished at the same time. 

  1. A factory can set up multiple host software packages at the same time to connect to the same equipment’s GEM interface, without any knowledge of or interference with each other. With only a single connection, a factory wanting to do the same thing has to implement some sort of GEM host broker to funnel the different GEM host package communications into a single GEM connection… a technically challenging feat.
  2. If an equipment supplier wants to create an application designed specifically for its equipment running in a factory, they can use one of the GEM connections. They don’t have to replicate functionality into a custom interface.
  3. If one equipment needs to monitor, control, or pass data directly to or from another equipment, this can be done using one of the GEM connections without interference to the factory GEM connection. This is relatively simple to set up. Sometimes this is called horizontal communication. Such communication can also be channeled through a host using the traditional vertical communication use case for a GEM interface.

What about safety?

Typically, I would expect factories to set up one and only one connection in the GEM interface to be in the online-remote state and allowed to send remote commands. But this is not an absolute requirement. It is not difficult to imagine applications where execution of remote commands is distributed among multiple applications. For example, an equipment supplier might use one GEM connection to manage periodic recalibration of the equipment based on the actual measured performance.

What are the technical complications? 

There are a few. 

  • Because each connection uses a separate port number, the GEM interface can only support a finite number of connections when using HSMS-SS. 
  • Because multiple connections are not addressed explicitly in the standard, there are no requirements for handling them. For example, GEM requires that operator commands and operator recipe management activity be reported to the host. However, when another connection sends a remote command or downloads a new recipe, there is no requirement to report this. Our CIMConnect product does report it, but no formal requirement says it must.
  • GEM requires the communication status to be displayed in the GUI, but what about multiple connections? It is not clear what needs to be displayed for multiple hosts. Typically I’ve just displayed the first GEM connection status, but it might be useful to show each connection status and give the operator a chance to control all GEM connections. 
  • Some collection events (and hence data variables), status variables, and equipment constants target the behavior of a single connection. This means that to implement multiple connections correctly, these connection-specific features must be unique to each connection. For example, consider the status variables EventsEnabled and ControlState: the values reported for these two variables are unique to each connection. This adds some complexity to implementing a GEM interface with multiple connections. Of course, our CIMConnect product already implements and handles this.

Does each GEM connection have to be identical? 

No, but generally speaking they should be the same. The same set of collection events/data variables, alarms, status variables, and equipment constants should be reported to all connections. However, there are use cases where it might be useful to have some unique collection events and data on one connection. For example, if an equipment supplier uses one GEM connection as a pipeline for a factory host package dedicated to their equipment, they might want to publish some unique data for that host’s eyes only. As mentioned above, if two GEM host connections create an identical report and link it to the same collection event, then both should receive identical data. On the other hand, trace data reports with the same status variables may not report identical data, because the values might be sampled independently and at different time intervals.

How many GEM connections should an equipment support in its GEM interface?

I recommend supporting five connections. Most GEM implementations are just using one connection today, so this opens the door for up to four more connections. This enables an equipment to handle most situations without the need to be reconfigured later at the factory. In CIMConnect, the overhead for having five connections is quite minimal, and virtually nothing if they are not used. 

What should the communication settings be? 

You should definitely set up the equipment as passive. This puts all of the configuration on the host side. The device ID can be the same for all connections, where 0, 1, or 32767 is best. 

How do I turn on multiple GEM connections in CIMConnect?

Since our CIMConnect product inherently supports multiple GEM connections, Cimetrix customers really only have to configure the setup file. CIMConnect was originally designed with multiple GEM connections in mind, so the support is native and intuitive, with virtually no extra programming required unless you count the additional work in the operator interface. In the setup file, just create the five [CONNECTIONX] sections initially, and then set up a connection-specific VARIABLES and EVENTS section for each of the five connections.
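As a rough illustration only, such a setup file might be laid out as follows. The [CONNECTIONX], VARIABLES, and EVENTS section names come from the description above, but the individual key names and section naming shown here are assumptions for the sketch, not the actual CIMConnect schema:

```ini
; Illustrative sketch only -- key names are assumptions, not the real schema
[CONNECTION1]
Port=5000        ; each passive HSMS-SS endpoint gets its own port
DeviceID=0       ; device ID can be the same for all connections

[CONNECTION2]
Port=5001
DeviceID=0

; ... CONNECTION3 through CONNECTION5 follow the same pattern ...

[CONNECTION1.VARIABLES]
; connection-specific status variables, e.g. EventsEnabled, ControlState

[CONNECTION1.EVENTS]
; collection events reported on this connection
```

Consult the CIMConnect documentation for the exact section and key names; the point is simply that each connection gets its own section plus its own VARIABLES and EVENTS definitions.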

Alternative Approaches?

One alternative approach is to look at the SEMI Equipment Data Acquisition (EDA) standards. An EDA interface is inherently for data collection only and has multiple-client access built into the standard as a fundamental requirement. Semiconductor front-end device manufacturers have successfully embraced this technology in addition to the GEM standard: the GEM interface is used by the Manufacturing Execution System for command and control of the equipment, while the EDA interface is used for every other application.

Final Thoughts

My recommendation is that everyone, especially Cimetrix CIMConnect customers, take a look at their GEM interface and make sure they are doing a good job supporting multiple host connections. CIMConnect makes this extremely easy. And let your customers know that you have this feature so that they can take advantage of it.

You can learn more about the GEM standard any time on our website.

GEM Standard

Topics: Industry Highlights, SECS/GEM, Smart Manufacturing/Industry 4.0, Cimetrix Products