The SR10000 series recorders produced by Yokogawa Electric Corporation are high-performance, multi-channel data recording devices widely used in industrial process control, laboratory monitoring, and other fields. This guide aims to systematically organize the official manual, extract key operations, and help users quickly master and effectively apply the recorder.
Chapter 1: Device Overview and Core Concepts
1.1 Models and Basic Parameters
Model Classification: Pen-type (SR10001 – SR10004) and dot-matrix type (SR10006).
Measurement Cycle: The pen-type has a fixed measurement cycle of 125 ms, while the dot-matrix type depends on the A/D integration time.
Input Channels: Correspond to the number of pens or dots in the model. Unused channels can be set to “Skip”.
1.2 Two Operating Modes
Setting Mode: Press and hold the MENU key for 3 seconds to enter; used to set day-to-day operating parameters.
Basic Setting Mode: From the setting mode, press and hold the △ and ▽ keys simultaneously for 3 seconds to enter; used for in-depth system configuration.
Important Note: The basic setting mode cannot be accessed during recording.
1.3 Core Concepts
Range Type: Such as thermocouple type K, DC voltage 2V, etc., with fixed measurable ranges.
Input Range: Specify the actual measurement range within the measurable range.
Recording Range: On the recording paper, a width of 100 mm represents 0% to 100% of the input range.
Scale Calculation: Linearly convert voltage signals into actual physical units.
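The scale calculation above can be sketched as a simple linear mapping from the configured input span to user units; the function name and the 1–5 V / 0–200 kPa figures below are illustrative, not taken from the manual.

```python
def scale_value(measured, span_low, span_high, scale_low, scale_high):
    """Linearly map a measured signal (e.g. volts) onto a user scale.

    span_low/span_high:   the configured input range (e.g. 1.0-5.0 V)
    scale_low/scale_high: the physical units it represents (e.g. 0-200 kPa)
    """
    fraction = (measured - span_low) / (span_high - span_low)
    return scale_low + fraction * (scale_high - scale_low)

# A 1-5 V transmitter representing 0-200 kPa: 3 V reads mid-scale.
print(scale_value(3.0, 1.0, 5.0, 0.0, 200.0))  # -> 100.0
```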
Chapter 2: Detailed Explanation and Configuration of Measurement Input Functions
2.1 Input Type and Range Setting
Operation Path: Setting mode → RANGE, select the channel and input type, and set the range values.
2.2 Input Signal Processing Functions
Filter (Pen-type Models): A low-pass filter to smooth signals.
Moving Average (Dot-matrix Models): Calculate the average of consecutive sampled values.
A/D Converter Integration Time: Suppress power frequency interference.
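The moving average used by dot-matrix models can be sketched as below; the window size and sample values are illustrative. (The integration time suppresses mains interference because integrating over a whole line cycle, 20 ms at 50 Hz, averages the hum to zero.)

```python
from collections import deque

def moving_average(samples, window):
    """Smooth an input by averaging the most recent `window` samples,
    in the spirit of the dot-matrix models' moving-average function."""
    buf = deque(maxlen=window)
    out = []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

# A noisy step input smoothed over a 4-sample window:
print(moving_average([0, 0, 4, 4, 4, 4], 4))
```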
2.3 Advanced Calculation and Compensation Functions
Bias: Add a fixed offset to the measured value.
Input Value Calibration (/CC1 Optional Accessory): Multi-point broken-line calibration.
Thermocouple Cold Junction Compensation: Compensate for errors caused by cold junction temperature changes.
Thermocouple/1 – 5V Open-circuit Detection: Detect signal disconnections and trigger alarms.
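A multi-point broken-line calibration of the /CC1 kind amounts to piecewise-linear interpolation between user-entered (measured, corrected) pairs. The sketch below is a minimal illustration; the correction table values are hypothetical.

```python
def calibrate(x, points):
    """Broken-line correction: `points` is a sorted list of
    (measured, corrected) pairs; values between points are interpolated
    linearly, values outside are clamped to the end points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Hypothetical two-segment correction table
table = [(0.0, 0.0), (50.0, 49.0), (100.0, 100.0)]
print(calibrate(25.0, table))  # midpoint of first segment -> 24.5
```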
Chapter 3: Alarm Function Configuration and Management
3.1 Alarm Types and Setting
Operation Path: Setting mode → ALARM, select the channel and alarm number, and set the alarm type and value.
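The high/low alarm evaluation configured above can be sketched as follows. Only the basic H (high) and L (low) types are shown, and the setpoints are illustrative; the recorder also offers other alarm types not modeled here.

```python
def check_alarms(value, alarms):
    """Evaluate high (H) and low (L) alarm setpoints for one channel.
    `alarms` maps alarm number -> (type, setpoint); returns the numbers
    of the alarms that are currently triggered."""
    triggered = []
    for num, (kind, setpoint) in alarms.items():
        if kind == "H" and value > setpoint:
            triggered.append(num)
        elif kind == "L" and value < setpoint:
            triggered.append(num)
    return triggered

# Channel with a high alarm at 80.0 and a low alarm at 10.0:
print(check_alarms(85.0, {1: ("H", 80.0), 2: ("L", 10.0)}))  # -> [1]
```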
Laser particle size analyzers are widely used in fields such as materials science, powder technology, biopharmaceuticals, and mineral processing. Their measurement accuracy and repeatability are key indicators for evaluating equipment performance. The Anton Paar PSA 1090 LD, as a high-precision wet laser particle size analyzer, may encounter typical abnormalities such as “slow drainage, low flow rate, system blockage, poor measurement repeatability, and large particle size deviation” during long-term use. Based on actual fault cases of a user’s equipment, this study conducts a systematic analysis from multiple dimensions including the light path, flow path, circulation pump, dispersion cell, and drainage channel, and proposes technical cause determination methods and engineering maintenance steps. This article aims to provide a complete set of fault diagnosis methods and scientific maintenance paths for third-party laboratories, after-sales engineers, and equipment users, helping to improve instrument reliability and service life.
1. Introduction
Laser particle size analyzers play an irreplaceable role in the field of powder and particle material characterization. With the rapid development of materials science and nanotechnology, the requirements for the accuracy, stability, and repeatability of particle size testing continue to increase. The Anton Paar PSA 1090 LD, as an internationally recognized laser particle size analyzer, has core advantages such as high light path stability, good dispersion effect, and high system automation. However, even high-end equipment may still encounter typical problems such as “slow drainage, blockage, poor repeatability, and large particle size deviation” during long-term operation or improper maintenance.
Based on real-world usage cases, this article, from the perspective of third-party laboratory engineers, systematically analyzes the root causes of such faults and provides immediately implementable diagnostic methods, aiming to provide high-value references for relevant practitioners.
2. Working Principle and System Composition of the PSA 1090 LD
To understand why the equipment exhibits abnormalities, it is necessary to first understand its internal structure and operating mechanism.
2.1 Introduction to the Wet Dispersion System
The PSA 1090 LD uses a wet dispersion method, where the liquid is driven by a circulation pump to form a continuous flow between the sample cell and the water tank. The water flow undertakes three tasks:
Transporting sample particles
Ensuring uniform dispersion of particles
Providing a stable light path environment
The stability of the flow rate determines whether the sample can uniformly pass through the light beam and whether the measurement can be precise.
2.2 Structure of the Light Path System
The laser is emitted from the transmitting end, passes through the sample in the sample cell, and the scattered light is collected by the detector. If the light path is affected, it will lead to significant data deviations.
Light path window contamination may cause:
Unstable scattered light intensity
Increased data noise
Abnormal oscillation of the particle size curve
This is an important factor contributing to measurement deviations.
2.3 Importance of the Circulation System and Fluid Dynamics
The circulation system consists of:
Suction hose
Circulation pump
Flow cell (sample cell)
Drainage channel
An increase in resistance at any position will lead to:
Decreased water flow
Inability to discharge bubbles
Accumulation of particles in the cell
Unstable test curves
Actual cases show that fluid dynamic problems are the main source of abnormalities in the PSA series.
3. Fault Manifestations and Initial Symptoms
According to feedback from the user’s site and video footage, the equipment exhibited typical system fault characteristics.
3.1 Slow Drainage and Insufficient Flow Rate
This is the most intuitive abnormal phenomenon. A normal device should be able to complete drainage quickly, but in this case:
The drainage speed is significantly reduced
The water flow is interrupted or intermittent
There is a noticeable sense of resistance
This indicates partial blockage within the circulation system.
3.2 Particle Deposition and Flocculation in the Sample Cell
From the photos of the sample cell window, it can be seen that:
There is a large amount of sediment at the bottom
There are flocculent impurities
The light path channel is not clean
This directly affects measurement accuracy.
3.3 Huge Deviations in Multiple Measurement Results
For example:
D50 changes from 0.8 µm to 58 µm (a roughly 70-fold jump)
The shapes of the three curves are completely different
This phenomenon is definitely not due to sample problems but rather:
Uneven flow rate
Incomplete dispersion of aggregates
Laser signal fluctuations
These cause systematic deviations.
3.4 Bubble Retention and Discontinuous Fluid Flow
The video shows the presence of:
A large number of bubbles in the liquid
Interruptions and jumps in the liquid flow
Inability of the water body to continuously flow through the sample cell
This directly leads to a sharp increase in optical signal noise.
4. Systematic Analysis of Fault Causes
Based on the fault manifestations, the main abnormal sources involved in this case are as follows.
4.1 Blockage in the Dispersion Cell and Flow Cell
The bottom of the sample cell and the drainage outlet are the most prone to blockage. Long-term accumulation of:
Microparticles
Scale
Sediment
Organic film
will narrow the fluid channel.
Results:
Insufficient flow rate
Discontinuous signals
Jittering of the particle size curve
4.2 Blockage in the Drainage Channel (Core Cause in This Case)
The drainage channel is narrow, and even a small amount of sediment can significantly affect the flow rate. In this case, the obvious slowdown in drainage indicates severe blockage in the channel.
4.3 Insufficient Suction or Excessive Load of the Circulation Pump
The circulation pump is not damaged but rather:
The resistance in the pathway has increased
It is difficult to form sufficient flow
The pump idles, is sluggish, or has fluctuating water output
This leads to abnormalities in the entire system.
4.4 Aging of the Water Inlet Hose and Formation of Biofilm
The hose in this case has shown:
Yellowing
Rough inner walls
Increased flow resistance
Biofilm or sediment reduces the water absorption efficiency.
4.5 Light Path Window Contamination and Optical Signal Attenuation
Deposits on the window will:
Change the incident light intensity
Cause abnormal scattering
Trigger abnormal peaks in particle size
Deform the distribution curve
This is significantly present in this case.
4.6 Software Parameter Factors
Although parameters such as refractive index and dispersion mode can also affect the results, they will not cause mechanical problems such as “slow drainage” and can be excluded.
5. Engineering Diagnostic Steps
The following diagnostic process can be used by third-party laboratories to judge the performance of the PSA series wet systems.
5.1 Flow Observation Method
Normal: continuous, steady flow.
Abnormal: flow interruption, slowness, repeated appearance of bubbles.
In this case, the flow rate is severely insufficient.
5.2 Blank Baseline Stability Judgment
A stable signal during blank testing indicates a normal light path; fluctuations suggest light path or fluid abnormalities. In this case, the baseline noise is significantly increased.
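The blank-baseline judgment can be made quantitative with a relative-standard-deviation check on detector readings. A minimal sketch, assuming a 1% acceptance limit chosen for illustration (the real limit depends on the method, not on any PSA specification):

```python
from statistics import pstdev, mean

def baseline_stable(samples, max_rsd_percent=1.0):
    """Judge blank-baseline stability: the relative standard deviation
    of the blank readings must stay below the chosen threshold."""
    rsd = 100.0 * pstdev(samples) / mean(samples)
    return rsd <= max_rsd_percent, rsd

stable, rsd = baseline_stable([100.2, 100.1, 99.9, 100.0, 99.8])
print(stable)  # -> True
```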
5.3 Evaluation of Ultrasonic Dispersion Effectiveness
If particles still aggregate after ultrasonic activation, it indicates:
Insufficient flow rate
Inability to carry away aggregates
rather than a fault in the ultrasonic device itself.
5.4 Inspection of the Optical Window of the Sample Cell
The presence of:
Mildew spots
Scale
Contamination points
may lead to unstable data.
5.5 Drainage Speed Test
The slower the drainage speed, the more it indicates:
Blockage in the flow channel
Adherents on the pipe walls
Excessive system resistance
In this case, the drainage speed has significantly decreased.
5.6 Judgment of Circulation Pump Performance
If the pump can operate normally but the flow rate is insufficient, it is mostly due to excessive resistance, and the pump may not necessarily be damaged.
6. System Maintenance and Recovery Plan (Engineer Level)
The following are the most effective maintenance steps for the PSA series.
6.1 Cleaning the Flow Path: Circulation with 1% NaOH Solution
Steps:
Add 1% NaOH solution to the water tank
Operate at the maximum flow rate for 10–15 minutes
Then rinse with a large amount of pure water for 10 minutes
If there is an ultrasonic function, activate it for collaborative cleaning
Functions:
Dissolve sediment
Remove biofilm
Clean the flow channel
6.2 Reverse Flushing of the Sample Cell (Key Step)
Using a 50–100 mL syringe:
Unplug the drainage hose
Aim the syringe at the drainage outlet
Inject water backward into the sample cell
It is normal to flush out black or yellow sediment. This is the most effective unclogging method for the PSA series.
6.3 Replacement of the Water Inlet Hose and Drainage Pipe
Aging hoses cause poor water absorption. In this case, the pipes are obviously aged and need to be completely replaced with new ones.
6.4 Cleaning Method for the Light Path Window
Use:
70–99% IPA
Lint-free cotton swabs
Gently wipe the contaminated areas and avoid scratching with hard objects.
6.5 Standard Process for Eliminating Bubbles
Operate at the maximum circulation
Tilt the instrument by 20–30 degrees
Discharge the liquid multiple times
Continuously observe the changes in bubbles inside the sample cell
6.6 Final Calibration and Repeatability Verification
Test:
Three repeatability curves
Stability of D10, D50, and D90
Baseline noise level
After recovery, the curves should have a high degree of overlap.
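The repeatability verification can be expressed as an RSD check over repeated D50 (or D10/D90) values; the 3% acceptance limit below is illustrative, not an instrument specification.

```python
from statistics import pstdev, mean

def repeatability_ok(d50_runs, max_rsd_percent=3.0):
    """Check that repeated D50 values agree within a chosen RSD limit."""
    rsd = 100.0 * pstdev(d50_runs) / mean(d50_runs)
    return rsd <= max_rsd_percent, round(rsd, 2)

print(repeatability_ok([0.81, 0.80, 0.79]))  # tight agreement after repair
print(repeatability_ok([0.8, 58.0, 12.0]))   # the faulty case: fails badly
```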
7. Case Study: Correspondence between Abnormal Data and Real Causes
In this case, typical “data distortion caused by unstable system flow rate” is observed.
7.1 Abnormal Shoulder Peaks in the Particle Size Distribution Curve
Shoulder peaks indicate that the particles are not uniformly dispersed, which is a false peak caused by unstable flow.
7.2 Direct Correlation between D50 Jumps and Flow Rate Problems
Insufficient flow rate will lead to:
Deposition of large particles, resulting in false large particle peaks
Uneven concentration, causing jumps
This is completely consistent with this case.
7.3 Reasons for Different Shapes of Three Measurement Curves
Interruption of water flow
Bubbles passing through the light path
Fluctuations in sample concentration
Not due to the sample itself.
8. Preventive Maintenance Strategies and Recommendations
To prevent similar faults from occurring again, the following maintenance system should be established:
8.1 Lifespan Management of Pipelines
It is recommended to replace hoses every 6–12 months.
8.2 Flow Path Cleaning Plan
Recommendations:
Clean with pure water once a week
Perform NaOH circulation once a month
Conduct reverse flushing once a quarter
8.3 Light Path Maintenance Cycle
Check the light path window every 1–2 months and immediately remove any scale if present.
8.4 Water Quality and Environment
Must use:
Deionized water (electrical conductivity < 10 μS/cm)
Clean sample cups
Avoid dust entering the water tank
9. Conclusion
This case fully demonstrates that when the Anton Paar PSA 1090 LD exhibits faults such as “slow drainage, blockage, and large particle size deviation,” the root causes are mostly a combination of fluid dynamic abnormalities, light path contamination, and aging pipelines. Through systematic diagnosis and engineering maintenance, the equipment performance can be fully restored.
Key insights include:
The flow rate is the primary factor affecting the measurement accuracy of wet methods
The drainage channel and sample cell are the most important cleaning points
Light path window contamination can sharply reduce measurement repeatability
Pipeline aging can lead to potential resistance problems
Ultrasonication and flow rate must work in tandem to ensure sufficient dispersion
For third-party laboratories and engineers, establishing standardized maintenance procedures is a necessary measure to ensure the long-term stable operation of instruments.
Anruiji E6 Series Inverter Troubleshooting: A Focus on “END” Faults and TRIP Light Illumination
Table of Contents
1. Introduction
2. Fundamentals of Inverters
 2.1 How Inverters Work
 2.2 Technical Specifications of Anruiji E6 Series Inverters
 2.3 Core Functions and Applications
3. Basic Fault Diagnosis Process
 3.1 Classification of Fault Phenomena
 3.2 Steps for Fault Diagnosis
4. In-Depth Analysis of “END” Faults and TRIP Light Illumination
 4.1 Definition and Manifestation of Faults
 4.2 Possible Causes of Faults
 4.3 Viewing and Interpreting Fault Codes
5. Common Fault Types and Solutions
 5.1 Overcurrent Faults (OC1/OC2/OC3)
 5.2 Overload Faults (OL1/OL2)
 5.3 Phase Loss Faults (SP1/SP0)
 5.4 Overvoltage/Undervoltage Faults (OV1/OV2/UV)
 5.5 Motor Parameter Autotuning Faults (TE)
 5.6 External Faults (EF)
6. Principles and Troubleshooting of Motor Parameter Autotuning
 6.1 Purpose and Process of Autotuning
 6.2 Causes and Solutions for Autotuning Failures
7. Maintenance and Upkeep of Inverters
 7.1 Daily Maintenance Checklist
 7.2 Periodic Maintenance Procedures
 7.3 Replacement of Wear-Prone Components
8. Advanced Fault Diagnosis Techniques
 8.1 Using Oscilloscopes for Signal Analysis
 8.2 Diagnosing Issues via Analog Inputs and Outputs
 8.3 Remote Monitoring through Communication Functions
9. Case Studies
 9.1 Case Study 1: “END” Fault Due to Failed Motor Parameter Autotuning
 9.2 Case Study 2: TRIP Light Illumination Caused by Overcurrent
 9.3 Case Study 3: Inverter Shutdown Due to Input Phase Loss
10. Preventive Measures and Best Practices
 10.1 Avoiding Common Faults
 10.2 Best Practices for Parameter Settings
 10.3 Environmental Factors Affecting Inverters
11. Conclusion
1. Introduction
Inverters are pivotal components in modern industrial automation systems, widely used for motor control, energy conservation, and precise speed regulation. The Anruiji E6 series inverters are renowned for their high performance, reliability, and extensive functionality. However, inverters can encounter various faults during operation, such as the “END” fault and TRIP light illumination, which can disrupt production and potentially damage equipment.
This article focuses on the Anruiji E6 series inverters, providing an in-depth analysis of the causes, diagnostic methods, and solutions for “END” faults and TRIP light illumination. Combined with practical case studies, this guide offers a systematic approach to troubleshooting and maintenance, helping engineers and technicians quickly identify and resolve issues to restore production efficiency.
2. Fundamentals of Inverters
2.1 How Inverters Work
Inverters adjust the frequency and voltage of the input power supply to achieve precise control of AC motors. Key components include:
Rectifier Unit: Converts AC power to DC power.
Filter Unit: Smooths the DC voltage.
Inverter Unit: Converts DC power back to adjustable frequency and voltage AC power.
Control Unit: Adjusts output frequency and voltage based on set parameters and feedback signals.
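The voltage/frequency relationship at the heart of this control chain can be sketched as a constant-V/F law: voltage tracks frequency at a fixed ratio up to the rated point, then is clamped. The 380 V / 50 Hz ratings below are illustrative assumptions.

```python
def vf_output_voltage(freq_hz, rated_voltage=380.0, rated_freq=50.0):
    """Constant V/F control: output voltage rises proportionally with
    frequency up to the rated point, then stays clamped at rated voltage
    (the field-weakening region)."""
    v = rated_voltage * freq_hz / rated_freq
    return min(v, rated_voltage)

print(vf_output_voltage(25.0))  # half speed -> 190.0 V
print(vf_output_voltage(60.0))  # above rated -> clamped at 380.0 V
```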
2.2 Technical Specifications of Anruiji E6 Series Inverters
The Anruiji E6 series inverters feature:
Input/Output Characteristics:
Input Voltage Range: 380 V / 220 V ±15%
Output Frequency Range: 0–600 Hz
Overload Capacity: 150% rated current for 60 s, 180% rated current for 10 s
Control Modes:
Sensorless Vector Control (SVC)
V/F Control
Torque Control
Functional Features:
PID Control, Multi-Speed Control, Swing Frequency Control
Instantaneous Power Loss Ride-Through, Speed Tracking Restart
25 types of fault protection functions
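The two published overload points (150% for 60 s, 180% for 10 s) can be turned into a simple time-limit lookup. Linearly interpolating between the two points is my own assumption for illustration, not documented E6 behavior.

```python
def overload_time_limit(current_ratio):
    """Allowed operating time for a given ratio of actual to rated
    current, anchored at the spec's 150 % / 60 s and 180 % / 10 s points."""
    if current_ratio <= 1.0:
        return float("inf")  # continuous operation at or below rated current
    if current_ratio <= 1.5:
        return 60.0
    if current_ratio >= 1.8:
        return 10.0
    # Assumed linear interpolation between (1.5, 60 s) and (1.8, 10 s)
    return 60.0 + (current_ratio - 1.5) * (10.0 - 60.0) / (1.8 - 1.5)

print(overload_time_limit(1.5))  # -> 60.0
print(overload_time_limit(1.8))  # -> 10.0
```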
2.3 Core Functions and Applications
Inverters are widely used in:
Fans and Pumps: Achieving energy savings through speed regulation.
Machine Tools and Injection Molding Machines: Precise control of speed and torque.
Cranes and Elevators: Smooth start/stop operations to reduce mechanical stress.
Textile and Fiber Industries: Swing frequency control for uniform winding.
3. Basic Fault Diagnosis Process
3.1 Classification of Fault Phenomena
Inverter faults can be categorized as:
Hardware Faults: Such as IGBT damage, capacitor aging, and loose connections.
Parameter Faults: Incorrect parameter settings or failed autotuning.
Environmental Faults: Overheating, high humidity, and electromagnetic interference.
Load Faults: Motor stalling, excessive load, or mechanical jamming.
3.2 Steps for Fault Diagnosis
Observe Fault Phenomena: Note display messages and indicator light statuses.
Check Fault Codes: Retrieve specific fault codes via the panel or communication software.
Analyze Possible Causes: Refer to the manual to list potential causes based on fault codes.
Systematic Troubleshooting: Start with simple checks and progress to more complex issues.
Verification and Repair: After fixing the fault, restart the inverter to verify the solution.
4. In-Depth Analysis of “END” Faults and TRIP Light Illumination
4.1 Definition and Manifestation of Faults
“END” Display: Typically appears after motor parameter autotuning or parameter setting completion. If accompanied by the TRIP light, it indicates a fault during autotuning or operation.
TRIP Light Illumination: Indicates that the inverter has triggered a fault protection and stopped output.
4.2 Possible Causes of Faults
Failed Motor Parameter Autotuning:
Motor not disconnected from the load (autotuning requires no load).
Incorrect motor nameplate parameters (F2.01~F2.05).
Inappropriate acceleration/deceleration times (F0.09, F0.10) causing overcurrent.
Overcurrent Faults:
Motor stalling or excessive load.
Unstable input voltage (undervoltage or overvoltage).
Mismatch between inverter power and motor power.
Overload Faults:
Motor operating under high load for extended periods.
Overload protection parameter (Fb.01) set too low.
Input/Output Phase Loss:
Loose connections in input (R, S, T) or output (U, V, W).
Overvoltage/Undervoltage:
Significant input voltage fluctuations.
Short deceleration time causing energy feedback and bus overvoltage.
4.3 Viewing and Interpreting Fault Codes
Press PRG/ESC or DATA/ENT to view specific fault codes (e.g., OC1, OL1, TE).
Refer to the “Fault Information and Troubleshooting” section in the manual to find solutions based on fault codes.
5. Common Fault Types and Solutions
5.1 Overcurrent Faults (OC1/OC2/OC3)
Causes:
Acceleration time too short (F0.09).
Motor stalling or excessive load.
Low input voltage.
Solutions:
Increase acceleration time (F0.09).
Check motor and load for mechanical jamming.
Verify input voltage stability.
5.2 Overload Faults (OL1/OL2)
Causes:
Motor operating under high load for extended periods.
Overload protection parameter (Fb.01) set too low.
Solutions:
Reduce the mechanical load, or select an inverter and motor with adequate capacity.
Set the overload protection parameter (Fb.01) to match the motor rating.
5.3 Phase Loss Faults (SP1/SP0)
Causes:
Loose or broken connections in the input (R, S, T) or output (U, V, W) wiring.
Solutions:
Tighten the input and output terminal connections.
Ensure no short circuits or open circuits in power source or motor wiring.
5.4 Overvoltage/Undervoltage Faults (OV1/OV2/UV)
Causes:
Significant input voltage fluctuations.
Short deceleration time causing energy feedback and bus overvoltage.
Solutions:
Increase deceleration time (F0.10).
Install braking resistors or units.
Check input voltage stability.
5.5 Motor Parameter Autotuning Faults (TE)
Causes:
Incorrect motor parameters.
Motor not disconnected from the load.
Autotuning timeout.
Solutions:
Re-enter motor nameplate parameters (F2.01~F2.05).
Ensure motor is unloaded.
Set appropriate acceleration/deceleration times (F0.09, F0.10).
5.6 External Faults (EF)
Causes:
External fault input terminal activation.
Communication faults (CE).
Solutions:
Check external fault input signals.
Verify communication lines and baud rate settings.
6. Principles and Troubleshooting of Motor Parameter Autotuning
6.1 Purpose and Process of Autotuning
Motor parameter autotuning aims to obtain precise motor parameters (e.g., stator resistance, rotor resistance, inductance) to enhance control accuracy. The process includes:
Set F0.13=1 (Full Autotuning).
Press RUN to start autotuning.
The inverter drives the motor and calculates parameters.
Upon completion, parameters are automatically updated to F2.06~F2.10.
6.2 Causes and Solutions for Autotuning Failures
Cause | Solution
Motor not unloaded | Ensure motor is disconnected from load
Incorrect parameters | Re-enter motor nameplate parameters (F2.01~F2.05)
Short acceleration/deceleration times | Increase F0.09, F0.10
Incorrect motor wiring | Check U, V, W connections
Unstable power supply | Verify input voltage
7. Maintenance and Upkeep of Inverters
7.1 Daily Maintenance Checklist
Check environmental temperature and humidity.
Ensure fan operates normally.
Verify input voltage and frequency stability.
7.2 Periodic Maintenance Procedures
Check Item | Check Content | Action
External Terminals | Loose screws | Tighten
PCB Board | Dust, debris | Clean with dry compressed air
Fan | Abnormal noise, vibration | Clean or replace
Electrolytic Capacitors | Discoloration, odor | Replace
7.3 Replacement of Wear-Prone Components
Fans: Replace after 20,000 hours of use.
Electrolytic Capacitors: Replace after 30,000 to 40,000 hours of use.
8. Advanced Fault Diagnosis Techniques
8.1 Using Oscilloscopes for Signal Analysis
Check input/output voltage waveforms for distortions or phase loss.
Analyze analog input/output signals for interference.
8.2 Diagnosing Issues via Analog Inputs and Outputs
Verify AI1, AI2 analog inputs are normal.
Check AO1, AO2 outputs match settings.
8.3 Remote Monitoring through Communication Functions
Use Modbus communication to read real-time inverter data.
Remotely adjust parameters to avoid on-site operation risks.
9. Case Studies
9.1 Case Study 1: “END” Fault Due to Failed Motor Parameter Autotuning
Phenomenon: Inverter displays “END”, TRIP light illuminated.
Cause: Motor not disconnected from load, autotuning timeout.
Solution:
Disconnect motor from load.
Re-enter motor parameters (F2.01~F2.05).
Restart autotuning (F0.13=1).
9.2 Case Study 2: TRIP Light Illumination Caused by Overcurrent
Phenomenon: Inverter shuts down during operation, displays OC1.
Cause: Acceleration time too short, motor stalling.
Solution:
Increase acceleration time (F0.09=20s).
Check motor load for jamming.
9.3 Case Study 3: Inverter Shutdown Due to Input Phase Loss
Phenomenon: Inverter fails to start, displays SP1.
Cause: Input power source R phase loss.
Solution:
Check input connections, ensure R, S, T are connected.
Restart inverter, fault cleared.
10. Preventive Measures and Best Practices
10.1 Avoiding Common Faults
Regularly check connections and environment.
Set reasonable acceleration/deceleration times and overload protection parameters.
Avoid frequent starts/stops to reduce mechanical stress.
10.2 Best Practices for Parameter Settings
Accurately set motor parameters (F2.01~F2.05) based on nameplate.
Optimize carrier frequency (F0.12) to balance noise and efficiency.
Enable AVR function (F0.15) to improve voltage stability.
10.3 Environmental Factors Affecting Inverters
Avoid high temperature, humidity, and dusty environments.
Ensure good ventilation to prevent overheating.
11. Conclusion
The “END” fault and TRIP light illumination in Anruiji E6 series inverters are typically caused by failed motor parameter autotuning, overcurrent, overload, phase loss, and other issues. Through a systematic fault diagnosis process, combined with fault codes and practical case studies, issues can be quickly identified and resolved. Regular maintenance and proper parameter settings are crucial for ensuring the long-term stable operation of inverters. Engineers should be familiar with the working principles and fault characteristics of inverters to enhance the efficiency and accuracy of troubleshooting.
The ABB EL3020 gas analyzer is widely used in industrial flue gas monitoring, combustion optimization, and emission control systems. Known for its accuracy and stability, it is often configured with O₂ sensors and Uras26 infrared modules to measure multiple gas components. However, during long-term operation, users may encounter the following warning:
30402 – Sensor:02 – Ampl. half
The amplification drift exceeds the HALF value of the permissible range.
This is a typical amplifier drift alarm, indicating that the signal amplification circuit or the sensor itself is drifting beyond the acceptable range. If not addressed promptly, it can degrade measurement accuracy or cause system lockout. This article provides a comprehensive, technically detailed explanation and solution strategy, including principle analysis, fault causes, diagnostic procedures, corrective actions, and preventive maintenance.
2. System Architecture and Signal Amplification Principle
2.1 System Components
An EL3020 analyzer typically consists of:
Main Control Unit: Handles signal acquisition, amplification, computation, and display.
Sensor Unit: Includes O₂ electrochemical or paramagnetic sensors.
Amplifier and Signal Conditioning Board: Amplifies microvolt/millivolt signals to standard voltage levels.
Power Supply Module: Provides stable ±15V and +5V power.
Communication and Display Interface: Connects to DCS/PLC systems.
2.2 Amplification Mechanism
The O₂ sensor outputs a very weak signal (in microvolts or millivolts). The EL3020 uses precision instrumentation amplifiers (e.g., AD620 or OPA227 series) for multiple-stage amplification and temperature compensation. During startup, the system records a zero reference signal and continuously monitors the amplifier gain. If the gain drift exceeds half of the permissible range, it triggers the “Ampl. half” alarm.
3. Meaning and Logic of Alarm Code 30402
3.1 Definition
Alarm Code | Description | Severity | Recommended Action
30402 – Sensor:02 Ampl. half | Amplifier drift exceeds half of the permissible range for Sensor 02 | Warning (non-fatal) | Inspect sensor, recalibrate, or replace amplifier board
3.2 Trigger Logic
The internal diagnostic continuously compares:
Current amplification factor (A_meas)
Reference amplification factor at calibration (A_ref)
Maximum permissible drift (ΔA_max)
If the following condition is met:

|A_meas − A_ref| > 0.5 × ΔA_max

then the “Ampl. half” warning is triggered. If the drift further exceeds the full permissible range ΔA_max, the system raises an “Ampl. full” error, freezing the measurement output.
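This trigger logic can be sketched directly from the comparison above; function and variable names are mine, not ABB's.

```python
def amplification_alarm(a_meas, a_ref, delta_a_max):
    """Classify amplifier drift per the 30402 logic: 'Ampl. half' above
    50 % of the permissible drift, 'Ampl. full' above 100 %, else 'ok'."""
    drift = abs(a_meas - a_ref)
    if drift > delta_a_max:
        return "Ampl. full"
    if drift > 0.5 * delta_a_max:
        return "Ampl. half"
    return "ok"

# Measured gain 1.07 vs reference 1.00, with a permissible drift of 0.10:
print(amplification_alarm(1.07, 1.00, 0.10))  # -> "Ampl. half"
```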
4. Root Cause Analysis
Based on field experience, the “Ampl. half” alarm on ABB EL3020 usually results from one or more of the following issues:
4.1 Sensor Aging or Contamination
Electrode degradation in electrochemical/paramagnetic O₂ sensors after prolonged use.
Gas contamination (SO₂, particulates) or membrane aging causing unstable output.
4.2 Amplifier Drift or Component Aging
Operating in high-temperature environments (>45°C) causes thermal drift in operational amplifiers, resistors, or capacitors.
Electrolytic capacitors degrade over time, shifting the amplifier’s DC bias.
4.3 Power Supply or Grounding Faults
Excessive power ripple (>50 mV) on ±15V supply.
Grounding resistance too high, introducing common-mode noise.
Aging voltage regulators (7815/7915).
4.4 Calibration Data Deviation
Outdated zero/span calibration values cause A_ref deviation.
EEPROM corruption or unexpected software reset.
4.5 Environmental and Gas Conditions
High humidity (>80% RH) causes condensation inside electronics.
Acidic or wet sample gas damages sensor stability.
5. Step-by-Step Troubleshooting Procedure
Step 1: Confirm Alarm Status
Navigate to Status → Messages → 30402 Sensor:02.
If both “Ampl. half” and “Ampl. full” appear → Stop measurement immediately.
If only “Ampl. half” → Continue monitoring while preparing for maintenance.
Step 2: Check Signal Trends
Go to Service → Sensor Diagnostics → Amplifier Value.
Observe drift tendency; continuous or increasing drift indicates amplifier instability.
Step 3: Measure Amplifier Output
Disconnect the sensor input and measure amplifier output voltage.
If voltage drifts >5 mV/min, amplifier board is defective.
Step 4: Recalibrate Analyzer
Perform Zero Calibration (use pure N₂ or zero gas).
Perform Span Calibration (use standard 8% O₂/N₂ calibration gas).
Restart analyzer and confirm if alarm disappears.
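The zero/span procedure amounts to a two-point linear calibration. The sketch below assumes the zero gas corresponds to a true value of 0; the raw readings are illustrative.

```python
def two_point_calibration(zero_reading, span_reading, span_true):
    """Derive gain and offset from zero-gas and span-gas readings, then
    return a correction function for raw readings."""
    gain = span_true / (span_reading - zero_reading)
    offset = -gain * zero_reading
    return lambda raw: gain * raw + offset

# Zero gas reads 0.2, the 8 % O2/N2 span gas reads 8.4 (illustrative):
correct = two_point_calibration(0.2, 8.4, 8.0)
print(round(correct(8.4), 3))  # span gas corrected back to 8.0
```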
Step 5: Check Power Supply and Grounding
Verify ±15V voltage ripple with an oscilloscope (<30 mV ideal).
Ensure grounding resistance <1 Ω.
Add ferrite cores or RC filters on signal lines if noise persists.
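The ripple acceptance check in this step reduces to a peak-to-peak comparison against the <30 mV ideal from the text; the sampled values below are illustrative.

```python
def ripple_ok(samples_mv, limit_mv=30.0):
    """Peak-to-peak ripple from sampled supply voltages (in mV),
    compared against the acceptance limit."""
    ripple = max(samples_mv) - min(samples_mv)
    return ripple <= limit_mv, ripple

# Four samples of a nominal +15 V rail, expressed in mV:
ok, ripple = ripple_ok([15012.0, 15020.0, 15005.0, 15018.0])
print(ok, ripple)  # 15 mV of ripple passes the 30 mV limit
```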
Step 6: Replace Defective Components
If alarm persists:
Replace the O₂ sensor module.
If no improvement, replace the amplifier board or main control unit.
6. Case Study
Background
A chemical plant used ABB EL3020 for O₂ and SO₂ monitoring in boiler exhaust. After three years, “30402 Ampl. half” warnings became frequent.
On-Site Diagnosis
O₂ sensor output showed unstable fluctuations.
Amplifier IC temperature reached 52°C.
Power supply ripple measured 85 mV (excessive).
Actions Taken
Replaced aged capacitors on the power board.
Recalibrated O₂ zero and span points.
Installed cooling fan near amplifier section.
Cleaned sensor chamber from dust and moisture.
Result
System stabilized; amplifier drift returned to normal. No alarms after six months of operation.
7. Preventive Maintenance Recommendations
Task | Interval | Description
Zero/Span Calibration | Every 3 months | Use certified calibration gases
Sensor Cleaning | Every 6 months | Remove dust and moisture; inspect O-rings
Power Check | Every 6 months | Verify ±15V ripple <30 mV
Cooling Inspection | Annually | Clean air ducts and ensure adequate ventilation
Amplifier Verification | Every 2 years | Test amplifier stability; replace if necessary
Additional recommendations:
Record Ampl drift trend logs regularly.
Backup configuration files via ELCom/RS232 interface.
Avoid prolonged operation in humid or dusty environments.
8. Technical Summary
Alarm Nature: Amplifier drift beyond calibration threshold, reflecting instability in the signal chain.
Root Causes: Sensor aging, power instability, amplifier temperature drift, or calibration loss.
“Ampl. full” demands immediate shutdown and inspection.
9. Conclusion
The “Amplification drift exceeds half range” warning may appear minor, but it signals a deeper issue in signal stability, thermal management, and calibration integrity within ABB EL3020 analyzers. For high-precision instruments like these, preventive maintenance is far more effective than corrective repair. By implementing systematic calibration, routine inspections, and component lifecycle management, operators can ensure long-term accuracy, reliability, and compliance with environmental standards.
Ultimately, maintaining signal stability is not only about the analyzer’s performance—it safeguards the entire process control chain that depends on its data.
X-ray fluorescence (XRF) spectroscopy is widely applied in geological exploration and mineral analysis due to its speed, non-destructive nature, and ability to determine multiple elements simultaneously. Handheld XRF analyzers are particularly valuable for on-site testing of iron ores, enabling quick determination of ore grades, on-site screening of element contents, and monitoring of mining production processes. However, handheld XRF results do not always align with laboratory chemical analyses, with deviations often stemming from improper sample preparation or inaccurate calibration. A thorough understanding of the instrument's calibration methods and analytical conditions is therefore essential to avoid reporting erroneous results.
Overview of the Principles and Calibration Mechanisms of Handheld XRF Analyzers
Handheld XRF analyzers operate based on the X-ray fluorescence effect: an X-ray tube emits primary X-rays to irradiate the sample, exciting characteristic X-rays (fluorescent rays) from the elements in the sample. The detector receives and measures the energy and intensity of these characteristic X-rays, and the software identifies the element types based on the characteristic energy peaks of different elements and calculates the element contents according to the peak intensities. Handheld XRF uses energy-dispersive spectroscopy analysis, acquiring signals from elements ranging from magnesium (Mg) to uranium (U) through a built-in silicon drift detector (SDD), enabling simultaneous analysis of major and minor elements in iron ores, such as iron, silicon, aluminum, phosphorus, and sulfur.
To convert the detected X-ray intensities into accurate element contents, XRF analyzers need to establish a calibration model. Most handheld XRF analyzers come pre-calibrated by the manufacturer, combining the fundamental parameters method and empirical calibration. The fundamental parameters method (FP) uses physical models of X-ray interactions with matter for calibration, allowing simultaneous correction of geometric, absorption, and secondary fluorescence effects over a wide range of unknown sample compositions. The empirical calibration method establishes an empirical calibration curve by measuring a series of known standard samples for quantitative analysis of specific types of samples. Handheld XRF also generally incorporates an energy calibration mechanism to align the spectral channels and ensure stable identification of element peak positions.
Error Issues Based on Calibration Using 310 Stainless Steel
In practical applications, some operators may calibrate handheld XRF using metal standards (e.g., 310 stainless steel) and then directly apply it to the compositional analysis of iron ores. However, this approach can introduce significant systematic errors due to the mismatch between the calibration standard and the sample matrix. 310 stainless steel is a high-alloy metal, differing greatly from iron ores (which are oxide-based non-metallic mineral matrices) in terms of physical properties and matrix composition.
Matrix effects are the primary cause of these errors. When the calibration reference of XRF differs from the actual sample matrix, it can lead to changes in the absorption or enhancement of the X-ray signals of the elements to be measured, causing deviations from the calibration curve. For example, when an instrument calibrated with 310 stainless steel is used to measure iron ores, since stainless steel contains almost no oxygen and has a high-density metal matrix, the excitation and absorption conditions of the Fe fluorescence signal in this matrix are entirely different from those in iron ores, causing the instrument to tend to overestimate the iron content.
In addition to matrix absorption differences, systematic errors can also arise from inappropriate calibration modes, linear shifts caused by single-point calibration, differences in geometry and surface conditions, and other factors. The combination of these factors can result in significant errors and biases in the results of iron ore measurements calibrated with 310 stainless steel.
Calibration Modes of XRF Analyzers and Their Impact on Results
Handheld XRF analyzers typically come pre-programmed with multiple calibration/analysis modes to accommodate the testing needs of different types of materials. Common modes include alloy mode, ore/geological mode, and soil mode. Improper mode selection can significantly affect the test results.
Alloy Mode: Generally used for analyzing the composition of metal alloys, assuming the sample is a high-density pure metal matrix. Using alloy mode to measure iron ores can lead to deviations and anomalies in the results because ores contain a large amount of oxygen and non-metallic elements.
Soil Mode: Mainly used for analyzing environmental soils or sediments, employing Compton scattering internal standard correction methods. It is suitable for measuring trace elements in light-element-dominated matrices. For iron ores, if only impurity elements are of concern, soil mode can provide good sensitivity, but problems may arise when the major element contents are high.
Ore/Mining (Geological) Mode: Specifically designed for mineral and geological samples, often using the fundamental parameters method (FP) combined with the manufacturer’s empirical calibration. It can simultaneously determine major and minor elements. For iron ores, which have complex compositions and a wide range of element contents, ore mode is the most suitable choice.
Principles and Examples of Errors Caused by Matrix Inconsistency
When the matrix of the standard material used for calibration differs from that of the actual iron ore sample to be measured, matrix effect errors can occur in XRF quantitative analysis. Matrix effects include absorption effects and enhancement effects, that is, the influence of other elements or matrix components in the sample on the fluorescence intensity of the target element.
For example, if a calibration curve for iron content is established using pure iron or stainless steel as standards and then used to measure iron ore samples mainly composed of hematite (Fe₂O₃), the metal matrix has strong absorption of Fe Kα fluorescence, while in the ore sample, Fe atoms are surrounded by oxygen and silicon and other light elements, which have weaker absorption of Fe Kα rays. Therefore, the Fe peak intensity produced by the ore sample is higher than that in the metal matrix. However, the instrument’s calibration curve is based on metal standards and still converts the content according to the metal matrix relationship, thus interpreting the stronger signal in the ore as a higher Fe content, leading to a systematic overestimation of Fe.
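The overestimation mechanism can be illustrated numerically. The sensitivities below are invented for illustration; only the direction of the bias reflects the physics described above:

```python
# Illustrative-only numbers (not from any standard): how a calibration slope
# derived on a metal matrix overestimates Fe in an oxide ore matrix. In the
# metal, stronger self-absorption yields fewer counts per %Fe, so the metal
# calibration assigns a larger concentration per detected count.

counts_per_pct_metal = 100.0   # assumed sensitivity on stainless-steel standards
counts_per_pct_ore = 130.0     # assumed higher sensitivity in a light oxide matrix

true_fe_pct = 60.0                            # actual Fe content of the ore
measured_counts = true_fe_pct * counts_per_pct_ore

# The instrument still applies the metal-matrix calibration:
reported_fe_pct = measured_counts / counts_per_pct_metal
print(reported_fe_pct)   # 78.0 -> systematic overestimation of Fe
```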
Calibration Optimization Methods for Iron Ore Testing
For iron ore samples, adopting the correct calibration strategy can significantly reduce errors and improve testing accuracy. The following calibration optimization methods are recommended:
Calibration Using Ore Standard Materials: Use iron ore standard materials to establish or correct the instrument’s calibration curve to minimize systematic errors caused by matrix mismatch.
Multi-Point Calibration Covering the Concentration Range: Perform multi-point calibration covering the entire concentration range instead of using only a single point for calibration. Use at least 3-5 standard samples with different compositions and grades to establish an intensity-content calibration curve for each element.
Correct Selection of Analysis Mode: Select the ore/mining mode for analyzing iron ore samples and avoid using alloy mode or soil mode.
Application of Compton Scattering Correction: Use the Compton scattering peak as an internal standard to correct for matrix effects and compensate for overall scattering differences between samples due to differences in matrix composition and density.
Regular Calibration and Quality Control: Establish a daily calibration and quality control procedure for handheld XRF. After each startup or change in the measurement environment, use stable standard samples for testing to check if the instrument readings are within the acceptable range.
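Recommendations 1 and 2 come down to fitting an intensity-to-content line through several ore standards. A minimal least-squares sketch with invented standard data:

```python
# Multi-point calibration sketch: fit an intensity -> content line through
# several ore standards (hypothetical intensities and certified Fe grades).

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Four iron-ore standards (assumed data): net Fe Ka intensity vs certified %Fe
intensity = [1200.0, 2500.0, 3900.0, 5200.0]
grade_pct = [20.0, 40.0, 60.0, 80.0]

a, b = linear_fit(intensity, grade_pct)
unknown_intensity = 3200.0
print(round(a * unknown_intensity + b, 1))   # estimated %Fe -> 50.0
```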
Other Factors Affecting XRF Testing of Iron Ores
In addition to the instrument calibration mode and matrix effects, the XRF testing results of iron ores are also influenced by factors such as sample particle size and uniformity, surface flatness and thickness, moisture content, probe contact method, measurement time and number of measurements, and environmental and instrument status. To obtain accurate and consistent measured values, these factors need to be comprehensively controlled:
Sample Particle Size and Uniformity: The sample should be ground to a sufficiently fine size to reduce particle size effects.
Sample Surface Flatness and Thickness: The sample surface should be as flat as possible and cover the instrument’s measurement window. Pressing the powder into pellets is the preferred preparation method.
Moisture Content: The sample should be dried to a constant weight before testing to avoid the influence of moisture.
Probe Contact Method: The probe should be pressed tightly against the sample surface for measurement to avoid air gaps in between.
Measurement Time and Number of Measurements: Appropriately extend the measurement time and repeat the measurements to take the average value to improve precision.
Environmental and Instrument Status: Ensure that the instrument is in good calibration and working condition and avoid the influence of extreme environments.
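The repeat-measurement recommendation can be quantified: reporting the mean of n readings reduces the standard error roughly as 1/√n. A short sketch with assumed readings:

```python
# Averaging repeated readings, as recommended above: the mean is reported,
# and the standard error of the mean shrinks roughly as 1/sqrt(n).
import statistics

readings_pct_fe = [62.1, 61.8, 62.4]    # three assumed repeat measurements

mean = statistics.mean(readings_pct_fe)
sdev = statistics.stdev(readings_pct_fe)        # sample standard deviation
sem = sdev / len(readings_pct_fe) ** 0.5        # standard error of the mean
print(round(mean, 2), round(sem, 3))            # report mean +/- sem
```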
Precision Optimization Suggestions and Operational Specifications
To integrate the above strategies into daily iron ore XRF testing work, the following is a set of optimized operational procedures and suggestions:
Instrument Preparation and Initial Calibration: Check the instrument status and settings, ensure that the battery is fully charged, and the instrument window is clean and undamaged. Use reference standard samples with known compositions for calibration verification to confirm that the readings of major elements are accurate.
Sample Preparation: Dry the sample to a constant weight, grind it into fine powder, and mix it thoroughly. Prepare sample pellets using the pressing method to ensure density, smoothness, no cracks, and sufficient thickness.
Measurement Operation: Place the sample on a stable supporting surface, ensure that the probe is perpendicular to and pressed tightly against the sample. Set an appropriate measurement time, and measure each sample for at least 30 seconds. Repeat the measurements 2-3 times to evaluate data repeatability and calculate the average value as the final reported value.
Result Correction and Verification: Perform post-processing corrections on the data as needed, such as dry basis conversion or oxide form conversion. Compare the handheld XRF results with known reference methods for verification and establish a calibration curve for correction.
Quality Control and Record-Keeping: Strictly implement quality control measures and keep relevant records. When reporting the analysis results, note key information to facilitate result interpretation and reproduction.
Conclusion
Handheld XRF analyzers have become powerful tools for on-site testing of iron ores, but the quality of their data highly depends on correct calibration and standardized operation. This paper analyzes the errors that may arise when using metal standards for calibration, elucidates the principles of systematic deviations caused by matrix effects, and compares the impacts of different instrument calibration modes on the results. Through discussion, a series of optimized calibration strategies for iron ore samples are proposed, and the significant influences of factors such as sample preparation, probe contact, and measurement time on testing accuracy are emphasized.
Overall, proper calibration of the instrument is the foundation for ensuring testing quality. Only with careful standard material selection, mode setting, and matrix correction can handheld XRF fully leverage its advantages of speed and accuracy to provide credible data for iron ore composition analysis. Mineral analysts should attach great importance to the control of calibration errors, combine handheld XRF measurements with necessary laboratory analyses, and establish calibration correlations for specific ores to enable mutual verification and complementarity between on-site and laboratory data. Through continuous improvement of calibration methods and strict quality management, handheld XRF is expected to achieve more precise and stable measurements in iron ore testing, providing strong support for geological prospecting, ore grading, and production monitoring.
The Yokogawa AQ6370D series optical spectrum analyzer is a high-performance and multifunctional testing instrument widely used in various fields such as optical communication, laser characteristic analysis, fiber amplifier testing, and WDM system analysis. With its high wavelength accuracy, wide dynamic range, and rich analysis functions, it has become an indispensable tool in research and development as well as production environments.
This article, closely based on the content of the AQ6370D Optical Spectrum Analyzer User’s Manual, systematically introduces the device’s operating procedures, functional modules, usage tips, and precautions. It aims to help users quickly master the device’s usage methods and improve testing efficiency and data reliability.
I. Device Overview and Initial Setup
1.1 Device Structure and Interfaces
The front panel of the AQ6370D is richly laid out, including an LCD display, soft key area, function key area, data input area, optical input interface, and calibration output interface. The rear panel provides various interfaces such as GP-IB, TRIGGER IN/OUT, ANALOG OUT, ETHERNET, and USB, facilitating remote control and external triggering.
Key Interface Descriptions:
OPTICAL INPUT: This is the optical signal input interface that supports common fiber connectors such as FC/SC.
CALIBRATION OUTPUT: Only the -L1 model has this built-in reference light source output interface for wavelength calibration.
USB Interface: Supports external devices such as mice, keyboards, and USB drives for easy operation and data export.
1.2 Installation and Environmental Requirements
To ensure normal operation of the device, the installation environment should meet the following conditions:
Temperature: Maintain between 5°C and 35°C.
Humidity: No more than 80% RH, non-condensing.
Environment: Avoid environments with vibrations, direct sunlight, excessive dust, or corrosive gases.
Space: Provide at least 20 cm of ventilation space around the device.
Note: The device weighs approximately 19 kg. When moving it, ensure two people operate it together and that the power is turned off.
II. Power-On and Initial Calibration
2.1 Power-On Procedure
Connect the power cord to the rear panel and plug it into a properly grounded three-prong socket.
Turn on the MAIN POWER switch on the rear panel. The POWER indicator on the front panel will turn orange.
Press the POWER key to start the device, which will enter the system initialization interface.
After initialization, if it is the first use or the device has been subjected to vibrations, the system will prompt for alignment adjustment and wavelength calibration.
2.2 Alignment Adjustment
Alignment adjustment aims to calibrate the optical axis of the built-in monochromator to ensure optimal optical performance.
Using Built-in Light Source (-L1 Model):
Connect the CAL OUTPUT and OPTICAL INPUT using a 9.5/125 μm single-mode fiber.
Press SYSTEM → OPTICAL ALIGNMENT → EXECUTE.
Wait approximately 2 minutes, and the device will automatically complete alignment and wavelength calibration.
Using External Light Source (-L0 Model):
Connect an external laser source (1520–1560 nm, ≥-20 dBm) to the optical input port.
Enter SYSTEM → OPTICAL ALIGNMENT → EXTERNAL LASER → EXECUTE.
2.3 Wavelength Calibration
Wavelength calibration ensures the accuracy of measurement results.
Using Built-in Light Source: Enter SYSTEM → WL CALIBRATION → BUILT-IN SOURCE → EXECUTE.
Using External Light Source: Choose EXECUTE LASER (laser type) or EXECUTE GAS CELL (gas absorption line type) and input the known wavelength value.
Note: The device should be preheated for at least 1 hour before calibration, and the wavelength error should not exceed ±5 nm (built-in) or ±0.5 nm (external).
III. Basic Measurement Operations
3.1 Auto Measurement
Suitable for quick measurements of unknown light sources:
Press SWEEP → AUTO, and the device will automatically set the center wavelength, scan width, reference level, and resolution.
The measurement range is from 840 nm to 1670 nm.
3.2 Manual Setting of Measurement Conditions
Center Wavelength/Frequency: Press the CENTER key to directly input a value or use PEAK→CENTER to set the peak as the center.
Scan Width: Press the SPAN key to set the wavelength range or use Δλ→SPAN for automatic setting.
Reference Level: Press the LEVEL key to set the vertical axis reference level, supporting PEAK→REF LEVEL for automatic setting.
Resolution: Press SETUP → RESOLUTION to choose from various resolutions ranging from 0.02 nm to 2 nm.
3.3 Trigger and Sampling Settings
Sampling Points: The range is from 101 to 50,001 points, settable via SAMPLING POINT.
Sensitivity: Supports multiple modes such as NORM/HOLD, NORM/AUTO, MID, HIGH1~3 to adapt to different power ranges.
Average Times: Can be set from 1 to 999 times to improve the signal-to-noise ratio.
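The benefit of a higher average count follows from uncorrelated noise averaging down by √N. A synthetic demonstration (not AQ6370D firmware behaviour):

```python
# Why raising the average count improves SNR: uncorrelated noise averages
# down by sqrt(N). Synthetic data only, with an assumed 10% RMS noise level.
import random
import statistics

random.seed(0)
signal = 1.0

def noisy_sample():
    return signal + random.gauss(0.0, 0.1)

def averaged(n):
    return sum(noisy_sample() for _ in range(n)) / n

# Spread of single shots vs 100-point averages:
singles = [noisy_sample() for _ in range(200)]
avgs = [averaged(100) for _ in range(200)]

print(round(statistics.stdev(singles) / statistics.stdev(avgs), 1))
# The ratio should come out near sqrt(100) = 10.
```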
IV. Waveform Display and Analysis Functions
4.1 Trace Management
The device supports 7 independent traces (A~G), each of which can be set to the following modes:
WRITE: Real-time waveform update.
FIX: Fix the current waveform.
MAX/MIN HOLD: Record the maximum/minimum values.
ROLL AVG: Perform rolling averaging.
CALCULATE: Implement mathematical operations between traces.
4.2 Zoom and Overview
The ZOOM function allows local magnification of the waveform, supporting mouse-drag selection of the area. The OVERVIEW window displays the global waveform and the current zoomed area for easy positioning.
4.3 Marker Function
Moving Marker: Displays the current wavelength and level values.
Fixed Marker: Up to 1024 can be set to display the difference from the moving marker.
Line Marker: L1/L2 are wavelength lines, and L3/L4 are level lines, used to set scan or analysis ranges.
Advanced Marker: Includes power spectral density markers, integrated power markers, etc., supporting automatic search for peaks/valleys.
4.4 Trace Math
Supports operations such as addition, subtraction, normalization, and curve fitting between traces, suitable for differential measurements, filter characteristic analysis, etc.
Common Calculation Modes:
C = A – B: Used for differential analysis.
G = NORM A: Normalize the display.
G = CRV FIT A: Perform Gaussian/Lorentzian curve fitting.
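The C = A - B and G = NORM A modes can be mimicked on plain level arrays; a sketch with assumed dB traces (subtracting dB traces corresponds to a ratio of linear powers, which is why it suits differential and filter measurements):

```python
# Trace-math sketch: the C = A - B and G = NORM A operations on level data.

trace_a_dbm = [-30.0, -20.0, -10.0, -20.0, -30.0]   # assumed device-under-test
trace_b_dbm = [-31.0, -22.0, -13.0, -22.0, -31.0]   # assumed reference trace

trace_c_db = [a - b for a, b in zip(trace_a_dbm, trace_b_dbm)]   # C = A - B

# G = NORM A: shift the trace so its peak sits at 0 dB
peak = max(trace_a_dbm)
trace_g_db = [a - peak for a in trace_a_dbm]

print(trace_c_db)   # relative (differential) response in dB
print(trace_g_db)   # normalized trace, peak at 0 dB
```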
V. Advanced Measurement Functions
5.1 Pulsed Light Measurement
Supports three modes:
Peak Hold: Suitable for repetitive pulsed measurements.
Gate Sampling: Synchronized sampling with an external gate signal.
External Trigger: Suitable for non-periodic pulsed measurements.
5.2 External Trigger and Synchronization
SMPL TRIG: Wait for an external trigger for each sampling point.
SWEEP TRIG: Wait for an external trigger for each scan.
SMPL ENABLE: Perform scanning when the external signal is low.
5.3 Power Spectral Density Display
Switch to dBm/nm or mW/nm via LEVEL UNIT, suitable for normalized power display of broadband light sources (such as LEDs, ASE).
VI. Data Analysis and Template Judgement
6.1 Spectral Width Analysis
Supports four algorithms:
THRESH: Threshold method.
ENVELOPE: Envelope method.
RMS: Root mean square method.
PEAK RMS: Peak root mean square method.
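The THRESH idea, for instance, is to measure the wavelength span over which the trace stays within a set threshold of its peak. A simplified sketch (no interpolation at the crossings, unlike the instrument):

```python
# Threshold-method spectral width sketch: span of points within `threshold_db`
# of the peak level. Simplified; real analyzers interpolate at the crossings.

def thresh_width(wavelengths_nm, levels_db, threshold_db=3.0):
    peak = max(levels_db)
    cutoff = peak - threshold_db
    above = [wl for wl, lv in zip(wavelengths_nm, levels_db) if lv >= cutoff]
    return max(above) - min(above)

# Assumed trace around a 1550 nm peak:
wl = [1549.8, 1549.9, 1550.0, 1550.1, 1550.2]
lv = [-20.0, -2.0, 0.0, -2.0, -20.0]
print(round(thresh_width(wl, lv), 2))   # -3 dB width in nm
```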
6.2 Device Analysis Functions
DFB-LD SMSR: Measure the side-mode suppression ratio.
FP-LD/LED Total Power: Calculate the total optical power through integration.
WDM Analysis: Simultaneously analyze multiple channel wavelengths, levels, and OSNR.
EDFA Gain and Noise Figure: Calculate based on input/output spectra.
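SMSR, for example, is the level difference in dB between the main peak and the strongest side mode. A naive sketch (local-maximum peak picking, not the analyzer's algorithm):

```python
# DFB-LD SMSR sketch: difference in dB between the main mode and the
# strongest side mode. Naive local-maximum search on an assumed trace.

def smsr_db(levels_dbm):
    # local maxima: interior points higher than both neighbours
    peaks = [levels_dbm[i] for i in range(1, len(levels_dbm) - 1)
             if levels_dbm[i - 1] < levels_dbm[i] > levels_dbm[i + 1]]
    peaks.sort(reverse=True)
    return peaks[0] - peaks[1]

spectrum = [-60.0, -45.0, -60.0, -3.0, -60.0, -48.0, -60.0]  # assumed trace
print(smsr_db(spectrum))   # main mode -3 dBm, side mode -45 dBm -> 42.0 dB
```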
6.3 Template Judgement (Go/No-Go)
Upper and lower limit templates can be set for quick judgement in production lines:
Upper limit line, lower limit line, target line.
Supports automatic judgement and output of results.
VII. Data Storage and Export
7.1 Storage Media
Supports USB storage devices for saving waveform data, setting files, screen images, analysis results, etc.
7.2 Data Formats
CSV: Used to store analysis result tables.
BMP/PNG: Used to save screen images.
Internal Format: Supports subsequent import and re-analysis.
7.3 Logging Function (Data Logging)
Can periodically record WDM analysis, peak data, etc., suitable for long-term monitoring and statistical analysis.
VIII. Maintenance and Troubleshooting
8.1 Routine Maintenance
Regularly clean the fiber end faces and connectors.
Avoid direct strong light input to prevent damage to optical components.
Use the original packaging for transportation to avoid vibrations.
8.2 Common Problems and Solutions
Large wavelength error: not calibrated, or temperature not yet stable. Solution: perform wavelength calibration and preheat for 1 hour.
Inaccurate level: fiber type mismatch. Solution: use 9.5/125 μm SM fiber.
Scan interruption: excessive sampling points or too-high resolution. Solution: adjust sampling points or resolution.
USB drive not recognized: incompatible format. Solution: format as FAT32 and avoid partitioning.
IX. Conclusion
The Yokogawa AQ6370D series optical spectrum analyzer is a comprehensive and flexible high-precision testing device. By mastering its basic operations and advanced functions, users can efficiently complete various tasks ranging from simple spectral measurements to complex system analyses. This article, based on the official user manual, systematically organizes the device’s usage procedures and key technical points, hoping to provide practical references for users and further improve testing efficiency and data reliability.
Chapter 1 Product Overview and Technical Specifications
1.1 Introduction to the Product System
The Fixturlaser NXA series laser alignment instrument is the flagship product of ACOEM AB (formerly ELOS Fixturlaser AB). Since its establishment in 1984, the company has established a complete professional service system in over 70 countries. As an industry-leading solution for shaft alignment, this system is designed based on innovative measurement technology and is widely used in various industrial equipment maintenance fields.
1.2 Core Technical Specifications
Display Unit NXA D Parameters
Two operating modes: On and Off
Dust and water resistance rating: IP65
Processor: 1 GHz dual-core main processor
Memory: 256 MB; flash storage: 8 GB
Operating temperature range: -10 to 50°C
Weight: Approximately 1.2 kg (including battery)
Sensor Unit Technical Specifications
Weight: Approximately 192 grams (including battery)
Operating temperature: -10 to 50°C
Protection rating: IP65
Compliance Certifications
Complies with EMC Directive 2004/108/EC
Complies with Low Voltage Directive 2006/95/EC
Complies with RoHS Directive 2011/65/EU
Chapter 2 Analysis of Core System Components
2.1 Functional Characteristics of the Display Unit
6.5-inch touchscreen display
On/off button with status LED
Battery status check button
Built-in 256 MB memory and 8 GB flash storage
Sensor Unit Configuration
M3 and S3 sensors: Anodized aluminum frame design, high-impact ABS plastic casing, TPE rubber overmolding process
Battery life: approximately 2-3 years under normal operating temperatures
Chapter 3 Safety Operation and Maintenance Procedures
3.1 Laser Safety Operation Standards
Uses laser diodes with a power output of <1.0mW
Laser classification: Class 2 safety level
Chapter 4 Core Principles of Laser Alignment Technology
4.1 Theoretical Basis of Alignment Technology
The system utilizes measurement units installed on two shafts. After rotating the shafts to different measurement positions, the system calculates the relative distances between the two shafts in two planes. It is necessary to accurately input the distances between the measurement planes, to the coupling, and to the machine feet.
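The geometry described above reduces to extrapolating a straight line through the two measured sensor planes. A hedged sketch of that math (distances, sign conventions, and names are assumptions for illustration, not ACOEM's algorithm):

```python
# Two-plane shaft-alignment geometry sketch: from relative offsets measured
# in two sensor planes, extrapolate the misalignment line to the coupling
# and to the machine feet. All distances assumed, in mm, from plane 1.

def alignment(o1_mm, o2_mm, d_sensors_mm, d_coupling_mm, d_foot1_mm, d_foot2_mm):
    """o1/o2: vertical offsets in sensor planes 1 and 2."""
    slope = (o2_mm - o1_mm) / d_sensors_mm      # mm of offset per mm of length

    def at(d_mm):                               # offset of the line at distance d
        return o1_mm + slope * d_mm

    return {
        "angularity_mm_per_100mm": slope * 100,
        "offset_at_coupling_mm": at(d_coupling_mm),
        "shim_front_foot_mm": -at(d_foot1_mm),  # negative: amount to correct
        "shim_rear_foot_mm": -at(d_foot2_mm),
    }

r = alignment(o1_mm=0.10, o2_mm=0.30, d_sensors_mm=200,
              d_coupling_mm=-100, d_foot1_mm=300, d_foot2_mm=800)
print(r)
```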
4.2 System Measurement Advantages
Accuracy Advantages
6-axis MEMS inertial motion sensors provide precise data acquisition
Chapter 5 Measurement Preparation and Installation
5.1 Pre-Measurement Preparation
Assess system installation environment limitations
Confirm shaft rotation feasibility
Prepare compliant shim materials
5.2 Sensor Installation Specifications
Specific Installation Steps
The sensor marked “M” is installed on the movable machine, while the sensor marked “S” is installed on the fixed machine.
Assemble the sensors on their V-block fixtures, precisely placing the fixtures on both sides of the coupling.
Hold the V-block fixtures upright and correctly install them on the shaft of the measurement object.
Lift the open end of the chain, tighten the chain to eliminate slack.
Securely tighten the chain using tension screws, and use dedicated tension tools if necessary.
Installation Accuracy Control Points
Adjust the sensor height by sliding it on the column until a clear laser line is obtained.
Lock the final position using the clamping devices on the backs of both units.
Chapter 6 Measurement Methods and Technology Selection
6.1 Rapid Mode Method
Technical Characteristics
Calculates alignment status by recording three points
Requires a minimum rotation angle of 60°
The system automatically records each measurement point
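Three recorded points are enough because the detector reading varies sinusoidally over a rotation: V(θ) = A + B·sin θ + C·cos θ. A minimal sketch of that math (not the NXA firmware), solving the 3×3 system directly with Cramer's rule:

```python
# Fit V(theta) = A + B*sin(theta) + C*cos(theta) through three recorded
# points. A is the rotation centre; B and C carry the misalignment components.
import math

def fit_three_points(samples):
    """samples: three (angle_deg, reading) pairs."""
    rows = [(1.0, math.sin(math.radians(t)), math.cos(math.radians(t)))
            for t, _ in samples]
    v = [reading for _, reading in samples]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(rows)
    coeffs = []
    for col in range(3):                 # Cramer's rule, one column at a time
        m = [list(row) for row in rows]
        for i in range(3):
            m[i][col] = v[i]
        coeffs.append(det3(m) / d)
    return coeffs                        # [A, B, C]

# Assumed readings at 0, 90 and 180 degrees:
A, B, C = fit_three_points([(0, 1.0), (90, 2.0), (180, 3.0)])
print(round(A, 3), round(B, 3), round(C, 3))
```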
6.2 Three-Point Measurement Method
Performs alignment calculations by manually acquiring three points
All measurement points must be manually collected
6.3 Clock Method Technique
Acquires three measurement points through 180° rotation
Computes accurate mechanical position information
Suitable for comparison and analysis with traditional methods
Chapter 7 Data Processing and Quality Management
7.1 Measurement Result Evaluation
Angle and offset values jointly determine alignment quality
Compare actual values with preset tolerance standards for analysis
Evaluation results directly determine whether further corrections are needed
Chapter 8 Analysis of Professional Application Technologies
8.1 Softcheck Soft Foot Detection
Uses the built-in Softcheck program system for detection
Provides precise measurements and displays results for each foot (in millimeters or mils)
8.2 OL2R Application Technology
Measurement Condition Requirements
Must be performed under both operating and cold conditions
The system automatically calculates and evaluates process variables
8.3 Target Value Presetting Technology
Preset Condition Analysis
Most equipment generates heat changes during operation
Ideally, the driven and driving equipment are affected to the same extent
Enables target value presetting under cold conditions
Chapter 9 Professional Maintenance Requirements
9.1 Cleaning Operation Procedures
The system surface should be wiped with a damp cotton cloth or swab
Laser diode apertures and detector surfaces must be kept clean
Do not use any type of paper towel material
Strictly prohibit the use of acetone-based organic solvents
9.2 Power Management Maintenance
Battery Service Life
Under normal usage conditions, the battery typically lasts approximately 2-3 years
9.3 Battery Charging Specifications
Full charging time is approximately 8 hours
When not in use for an extended period, charge to 50-75% capacity
It is recommended to perform maintenance charging every 3-4 months
Chapter 10 Fault Diagnosis and Repair Procedures
10.1 System Anomaly Detection
Check battery level
Confirm good charging status
Ensure Bluetooth device connection is normal
Chapter 11 Quality Assurance System
11.1 Repeatability Testing
Must be performed before each measurement
Establish correct sampling time parameter settings
Effectively avoid the influence of external environmental factors
Chapter 12 Technological Development Trends
12.1 Intelligent Development Directions
Integration of Internet of Things (IoT) technology
Remote monitoring and diagnostic capabilities
Application of digital twin technology
12.2 Precision Development Directions
Continuous improvement in measurement accuracy
Optimization and improvement of operational procedures
Expansion and enhancement of system functions
Through an in-depth technical analysis of the Fixturlaser NXA series products, operators can fully grasp the core technological points of the equipment, thereby fully leveraging its significant value in the field of industrial equipment maintenance. This enables a notable increase in equipment operational efficiency and reasonable control over maintenance costs.
The Easy-Laser E420 is a laser-based shaft alignment system designed specifically for the alignment operations of horizontally and vertically installed rotating machinery, such as pumps, motors, gearboxes, etc. This system utilizes high-precision laser emitters and Position Sensitive Detectors (PSDs) to capture alignment deviations in real-time and guides users through adjustments with intuitive numerical and graphical interfaces. This guide combines the core content of the user manual and provides detailed explanations on equipment composition, operation procedures, functional settings, and maintenance to help users fully master the usage methods of the device.
II. Equipment Composition and Key Components
System Components
Measurement Units (M Unit and S Unit): Installed on the fixed end and the movable end respectively, transmitting data via wireless communication.
Display Unit E53: Equipped with a 5.7-inch color backlit display, featuring a built-in lithium battery that supports up to 30 hours of continuous operation.
Accessory Kit: Includes shaft brackets, chains, extension rods (60mm/120mm), measuring tapes, power adapters, and data management software, etc.
Technical Specifications
Resolution: 0.01 mm (0.5 mil)
Measurement Accuracy: ±5µm ±1%
Laser Safety Class: Class 2 (power <0.6mW)
Operating Temperature Range: -10°C to +50°C
Protection Rating: IP65 (dustproof and waterproof)
III. Equipment Initialization and Basic Settings
Display Unit Operation
Navigation and Function Keys: Use the directional keys to select icons or adjust values, and the OK key to confirm operations. Function key icons change dynamically with the interface, with common functions including returning to the previous level, saving files, and opening the control panel.
Status Bar Information: Displays the current unit, filtering status, battery level, and wireless connection status.
Screen Capture: Press and hold the “.” key for 5 seconds to save the current interface as a JPG file, facilitating report generation.
Battery and Charging Management
Charging Procedure: Charge the display unit with the original power adapter; up to 8 measurement units can be charged simultaneously via a distribution box.
Low Battery Alert: A red LED flashes to indicate that charging is needed; a green LED flashes during charging and remains steadily lit when fully charged.
Temperature Considerations: The charging environment should be controlled between 0°C and 40°C, with faster charging speeds in the off state.
System Settings
Language and Units: Supports multiple languages, with unit options for metric (mm) or imperial (mil).
IV. Detailed Measurement Procedures
Horizontal Alignment (Horizontal Program)
Installation Steps: Fix the S unit on the stationary machine and the M unit on the movable machine, ensuring the correct relative positions of the two units. Align the laser beams with the targets on both sides using the adjustment knobs. When using wireless functionality, search for and pair the measurement units in the control panel.
Measurement Modes:
EasyTurn™: Allows recording three measurement points within a 40° rotation range, suitable for space-constrained scenarios.
9-12-3 Mode: Requires recording data at the 9 o’clock, 12 o’clock, and 3 o’clock positions on a clock face.
Result Analysis: The interface displays real-time horizontal and vertical offsets and angular errors, with green indicators showing values within tolerance ranges.
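The result analysis above reduces to simple rigid-body geometry: the correction needed at each foot pair is the coupling offset plus the angular error projected over the distance from the coupling to that foot. A minimal sketch with assumed distances and the common mm-per-100-mm angle convention (the E420 performs this calculation internally):

```python
# Sketch: converting a measured coupling offset and angular error into
# front/rear foot corrections for the movable machine. The distances
# are illustrative assumptions, not values from the E420 manual.

def foot_corrections(offset_mm, angle_mm_per_100mm, front_dist_mm, rear_dist_mm):
    """Correction needed at each foot pair (same sign convention as the offset)."""
    slope = angle_mm_per_100mm / 100.0          # angular error as mm per mm
    front = offset_mm + slope * front_dist_mm   # correction at the front feet
    rear = offset_mm + slope * rear_dist_mm     # correction at the rear feet
    return front, rear

# Example: 0.10 mm offset, 0.05 mm/100 mm angular error,
# front feet 200 mm and rear feet 500 mm from the coupling centre.
front, rear = foot_corrections(0.10, 0.05, 200, 500)
print(round(front, 3), round(rear, 3))  # 0.2 0.35
```

The sign of each correction tells the operator whether to add or remove shims at that foot pair.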
Vertical Alignment (Vertical Program)
Applicable Scenarios: For vertically installed or flange-connected equipment.
Key Parameter Inputs: Include measurement unit spacing, bolt quantity (4/6/8), bolt circle diameter, etc.
Adjustment Method: Gradually adjust the machine base height and horizontal position based on real-time values or shim calculation results.
Softfoot Check
Purpose: To check if the machine feet are evenly loaded, avoiding alignment failure due to foundation distortion.
Operation Procedure: Tighten all anchor bolts. Sequentially loosen and retighten individual bolts, recording detector value changes.
Result Interpretation: Arrows indicate the machine tilt direction, requiring shim adjustments for the foot with the largest displacement.
V. Advanced Functions and Data Processing
Tolerance Settings (Tolerance)
Preset Standards: Tolerances are graded by rotational speed (e.g., 0–1000 rpm corresponds to a 0.07 mm offset tolerance); users can also define custom tolerance values.
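The speed-graded tolerance can be sketched as a simple table lookup. Only the 0–1000 rpm band (0.07 mm) comes from the text above; the other bands are placeholders for illustration, and real values should be taken from the E420's built-in standard or a site specification:

```python
# Speed-graded offset tolerance lookup. Only the first band (0–1000 rpm
# -> 0.07 mm) is from the guide; the rest are placeholder values.

OFFSET_TOLERANCE_MM = [
    (1000, 0.07),   # 0–1000 rpm (from the guide)
    (2000, 0.05),   # placeholder
    (3000, 0.03),   # placeholder
]

def offset_tolerance(rpm):
    """Return the offset tolerance for a given shaft speed, or None if
    the speed is above the table (use a custom tolerance instead)."""
    for limit, tol in OFFSET_TOLERANCE_MM:
        if rpm <= limit:
            return tol
    return None

print(offset_tolerance(900))  # 0.07
```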
File Management
Saving and Exporting: Supports saving measurement results as XML files, which can be copied to a USB drive or associated with equipment data via barcodes.
Favorites Function: Save commonly used machine parameters as “FAV” files for direct recall later.
Filter Adjustment (Filter)
Function: Suppresses reading fluctuations caused by temperature variations or vibrations.
Setting Recommendations: The default value is 1, typically using levels 1–3 for filtering, with higher values providing greater stability but taking longer.
Thermal Compensation (Thermal Compensation)
Application Scenarios: Compensates for height changes due to thermal expansion during machine operation. For example, when thermal expansion is +5mm, a -5mm compensation value should be preset in the cold state.
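The sign convention in the example above generalizes: each expected thermal growth is preset as its negative in the cold state, so the machine comes into line at operating temperature. A minimal sketch with illustrative values:

```python
# Thermal-compensation sign convention: a machine expected to grow
# thermally is deliberately aligned low (negative target) by the same
# amount when cold. Position names and values are illustrative.

def cold_targets(expected_growth_mm):
    """Map each measured position to its cold-state compensation value."""
    return {pos: -growth for pos, growth in expected_growth_mm.items()}

# Example: +5 mm expected vertical growth at the coupling.
print(cold_targets({"vertical": 5.0}))  # {'vertical': -5.0}
```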
VI. Calibration and Maintenance
Calibration Check
Quick Verification: Lift the measurement unit by a known amount (e.g., 1 mm using shims) and verify that the readings track the actual displacement within tolerance (e.g., 0.01 mm).
Safety Precautions
Laser Safety: Never look directly into the laser beam or aim it at others’ eyes.
Equipment Warranty: The entire unit comes with a 3-year warranty, but the battery capacity warranty period is 1 year (requiring maintenance of at least 70% capacity).
Prohibited Scenarios: Do not use in areas with explosion risks.
VII. Troubleshooting and Technical Support
Common Issues
Unstable Readings: Check for environmental temperature gradients or airflow influences, and increase the filtering value.
Unable to Connect Wireless Units: Ensure that the units are not simultaneously using wired connections and re-search for devices in the control panel.
Service Channels
Equipment must be repaired or calibrated by certified service centers. Users can query global service outlets through the official website.
VIII. Conclusion
The Easy-Laser E420 significantly enhances the efficiency and accuracy of shaft alignment operations through intelligent measurement procedures and intuitive interactive interfaces. Users should strictly follow the manual steps for equipment installation, parameter input, and result analysis, while making full use of advanced functions such as file management and thermal compensation to meet complex operational requirements. Regular calibration and standardized maintenance ensure long-term stable operation of the equipment, providing guarantees for industrial equipment safety.
The TSI VelociCalc 9565 series multifunction air velocity meters, manufactured by TSI Incorporated (USA), are among the most recognized instruments for ventilation testing and cleanroom airflow diagnostics. Their modular design allows the main unit to connect to a variety of intelligent probes through a standard 7-pin Mini-DIN interface, enabling simultaneous measurements of air velocity, airflow, temperature, humidity, CO, CO₂, VOC, and differential pressure.
This article focuses on a specific configuration:
Main unit: TSI 9565-P-NB, a multifunction meter equipped with a differential-pressure sensor (the “-NB” suffix indicates no Bluetooth).
Probe: TSI 964 hot-film probe for air velocity, temperature, and relative humidity.
Together they provide comprehensive readings of velocity, volumetric flow, temperature, humidity, and static/differential pressure, widely used in:
Fume-hood face-velocity tests;
Cleanroom laminar-flow verification;
HVAC air-balancing and commissioning;
Energy-efficiency assessments of ventilation systems.
2. Working Principle and Structural Overview
2.1 Hot-film anemometry
The 964 probe employs a constant-temperature hot-film anemometer. Its sensing element is a precision platinum film that is electrically heated above ambient temperature.
When air passes over the sensor, convective cooling occurs;
The electronic bridge circuit maintains a fixed temperature difference ΔT;
The electrical power required to maintain ΔT rises with air velocity according to King's law (roughly with the square root of velocity);
The resulting signal is linearized and temperature-compensated to yield the velocity reading (m/s).
The probe also houses a temperature and humidity module, ensuring density compensation and stable performance over a wide range of conditions.
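The bridge behaviour described above is commonly modelled with King's law, P = A + B·v^n at fixed ΔT, which the electronics invert to report velocity. A minimal sketch; the coefficients A, B and the exponent n (≈ 0.5) are illustrative calibration constants, not values from the TSI manual:

```python
# Sketch of King's-law linearization for a constant-temperature
# hot-film anemometer. A, B and N are per-sensor calibration constants;
# the values here are purely illustrative.

A, B, N = 0.8, 1.2, 0.5

def power_from_velocity(v):
    """Bridge power needed to hold the film at fixed overheat (King's law)."""
    return A + B * v**N

def velocity_from_power(p):
    """Invert King's law to recover air velocity from bridge power."""
    return ((p - A) / B) ** (1.0 / N)

v = 4.0
p = power_from_velocity(v)
print(round(velocity_from_power(p), 6))  # 4.0
```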
2.2 Differential-pressure module
The 9565-P-NB main unit integrates a ±15 in H₂O (±3735 Pa) differential-pressure sensor. Through the positive (+) and negative (–) ports, the meter can measure static or differential pressure and compute velocity using a Pitot tube. Accuracy is specified as ±1 % of reading ±1 Pa.
2.3 Probe-to-main-unit interface
The 7-pin Mini-DIN connector at the base of the instrument provides:
+5 VDC power to the probe;
Analog signal inputs (velocity, temperature, humidity);
A digital line for probe identification and calibration coefficients.
Once connected, the main unit automatically reads the probe’s ID EEPROM, displays its model, and activates relevant measurement menus. If this recognition fails, the instrument shows “Probe Error” and all velocity-related readings remain at 0.00 m/s.
3. Normal Operation Guidelines
3.1 Power-up and warm-up
According to the manual (Chapter 3), the instrument should warm up for about five minutes after power-on before performing pressure zeroing. This stabilizes the internal sensors and reference voltages.
3.2 Probe orientation and insertion
The orientation dimple on the probe must face upstream.
At least 3 in (7.5 cm) of the probe should be exposed to the airflow to ensure that both the temperature and humidity sensors are fully in the airstream.
Extend the telescopic rod by pulling on the metal tube, never by the cable, to avoid internal wire breakage.
3.3 Display configuration
In the Display Setup menu, up to five parameters can be shown simultaneously (one primary in large font and four secondary). Typical configuration:
Primary: Flow (L/s or CFM) or Velocity (m/s or fpm)
Note: “Pitot Velocity” and “AF Probe Velocity” cannot be active at the same time; only one may be ON or set as PRIMARY.
4. Root-Cause Analysis of “Zero Airflow / Zero Velocity” Symptoms
A frequently reported issue is that the display suddenly shows 0.00 m/s velocity and 0.00 L/s flow, while pressure values remain valid. Based on the manual and field experience, the following causes are most probable.
4.1 Probe recognition failure (most common)
If the main unit cannot read the probe’s EEPROM data, only built-in channels (pressure, temperature, baro) appear, while velocity stays at zero. The troubleshooting table lists:
Symptom: Probe plugged in, but instrument does not recognize it. Cause: Probe was inserted while instrument was ON. Action: Power OFF the unit and turn it ON again.
If the problem persists:
Connector pins may be oxidized or bent;
The probe ID circuit or EEPROM may be defective.
4.2 Burned or open-circuit hot-film element
Inside the 964 probe, the micro-thin film (<100 µm) can be destroyed by high temperature, moisture, or dust contamination. Typical signs:
The probe model appears correctly in the menu;
All velocity readings remain 0.00;
No error message displayed.
Measuring resistance between signal pins with a multimeter helps confirm: an open circuit indicates sensor burnout.
4.3 Incorrect measurement setup
If “Velocity” or “Flow” parameters are disabled in the Display Setup, or if Flow is set as PRIMARY without enabling Velocity as a secondary, the display will not show airflow data.
4.4 Cable or connector damage
Frequent bending or improper storage can break internal wires. Symptoms include intermittent readings when the cable is moved or total loss of signal.
4.5 Faulty probe port on the main unit
When even a known-good probe is not recognized, the main unit’s connector solder joints or signal amplifier may be defective. The manual specifies: “Factory service required on instrument.”
5. Systematic Troubleshooting Procedure
① Re-plug the probe with the power off. Expected: the unit recognizes the probe after restart. If normal → a software/recognition issue.
② Check the "Probe Info" menu. Expected: it displays "964 Probe SN xxxx". If blank → a contact or ID-circuit fault.
③ Verify the Display Setup. Expected: Velocity = ON, Flow = ON. If readings are still 0 → hardware failure.
④ Swap in a known-good probe. Expected: the new probe works. If so → the original probe is damaged.
⑤ Measure resistance between the signal pins. Expected: a finite value in the several-hundred-Ω to kΩ range. An open circuit → the hot film is burned out.
⑥ Restore factory settings / calibration. Expected: the configuration is reset. If the symptom is unchanged → return the unit for service.
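The stepwise procedure can also be encoded as an ordered checklist so that each inspection and its implication are logged for service records. A simplified sketch, not instrument firmware:

```python
# Ordered troubleshooting checklist for the zero-velocity symptom.
# Step names and implications follow the procedure in the text.

STEPS = [
    ("re-plug probe with power off", "software/recognition issue if now normal"),
    ("check Probe Info menu", "contact/ID circuit fault if blank"),
    ("verify Display Setup (Velocity/Flow ON)", "hardware failure if still 0"),
    ("swap in a known-good probe", "original probe damaged if new one works"),
    ("measure pin resistance", "hot-film burned if open circuit"),
    ("restore factory settings", "return for service if unchanged"),
]

def next_step(completed):
    """Return the next inspection to perform, or None when all are done."""
    return STEPS[completed][0] if completed < len(STEPS) else None

print(next_step(0))  # re-plug probe with power off
```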
6. Maintenance and Calibration Recommendations
6.1 Routine care
Keep probes clean; avoid oily or dusty airflows.
After use, gently blow dry air across the sensor head.
Store in a dry environment, away from direct sunlight.
Remove batteries during long-term storage to prevent leakage.
6.2 Calibration interval
TSI recommends annual factory calibration to maintain traceable accuracy. Field calibration via the CALIBRATION menu is possible but only for minor adjustments; full calibration must be performed by TSI or an authorized lab.
6.3 Typical calibration specifications
Velocity: 0 to 50 m/s; ±3 % of reading or ±0.015 m/s
Temperature: –10 to 60 °C; ±0.3 °C
Relative Humidity: 5 to 95 % RH; ±3 % RH
Differential Pressure: ±3735 Pa; ±1 % of reading ±1 Pa
7. Mechanism of Hot-film Probe Failure
Hot-film velocity sensors are extremely sensitive and delicate. Typical failure mechanisms include:
Burnout of heating element — due to transient over-current or contact bounce;
Condensation — moisture films short or isolate the element;
Cable fatigue — repeated bending leads to conductor breakage.
Failures 1 and 3 (element burnout and cable breakage) are the primary causes of complete loss of velocity signal ("0 m/s"). During repair, check:
Continuity between connector pins and the sensor head;
Visual inspection for dark or cracked sensing film;
Cross-test using another known-good probe.
8. Case Study: Field Repair Example
Background
A laboratory used a TSI 9565-P-NB + 964 probe to measure fume-hood airflow. After about three years of service, the display suddenly showed:
Pressure fluctuating normally, but velocity = 0.00 m/s and flow = 0.00 L/s.
Diagnosis
Probe information visible → communication OK.
Re-plugging did not help.
Sensor head inspection revealed blackened film.
Pin resistance = open circuit.
Resolution
Replaced the 964 probe with a new one.
Instrument operated normally.
Post-calibration deviation < 1.8 %.
Conclusion: The zero-airflow symptom was caused by an open-circuit hot-film element.
9. Using Differential-Pressure Mode as Backup
Even when the velocity probe fails, the 9565-P-NB can still measure airflow via Pitot tube + pressure ports:
Connect Pitot total pressure to “+” port and static pressure to “–”;
Select Flow Setup → Pressure/K-factor and input duct dimensions;
The instrument converts ΔP to velocity using standard equations.
This method provides a temporary substitute for velocity readings until the probe is repaired.
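The ΔP-to-velocity conversion referred to above is the standard incompressible Pitot equation, v = √(2ΔP/ρ). A minimal sketch; the air density is an assumption (dry air near 20 °C at sea level), and the 9565 applies its own density and unit corrections internally:

```python
import math

# Sketch of the standard Pitot-tube conversion from differential
# pressure to air velocity: v = sqrt(2 * dP / rho). RHO_AIR is an
# assumed density, not a value from the TSI manual.

RHO_AIR = 1.204  # kg/m^3, dry air at ~20 °C (assumed)

def pitot_velocity(dp_pa, rho=RHO_AIR):
    """Air velocity (m/s) from Pitot differential pressure (Pa)."""
    return math.sqrt(2.0 * dp_pa / rho)

def duct_flow(dp_pa, duct_area_m2):
    """Volumetric flow (m^3/s), assuming uniform velocity over the duct."""
    return pitot_velocity(dp_pa) * duct_area_m2

print(round(pitot_velocity(60.0), 2))  # 9.98
```

In practice a duct traverse (multiple ΔP points averaged) gives a far better flow estimate than a single centre-line reading.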
10. Safety and Usage Notes
Avoid electrical hazards: never use near live high-voltage sources.
Do not open the case: user disassembly voids warranty.
Operating limits:
Main unit: 5 to 45 °C
Probe: –10 to 60 °C
Maximum overpressure: 7 psi (48 kPa); exceeding this may rupture the pressure sensor.
11. Conclusion
The TSI 9565-P-NB VelociCalc is a high-precision, versatile instrument integrating differential-pressure, velocity, and humidity measurements in one compact platform. However, in practical field use, the common “airflow = 0” fault is rarely caused by the main unit. Instead, it almost always results from probe recognition failure or hot-film sensor damage.
Adhering to proper operating procedures—power-off insertion, warm-up before zeroing, periodic cleaning, and annual calibration—greatly extends probe life and maintains accuracy.
For maintenance engineers, understanding the signal flow and failure signatures enables quick fault localization and minimizes downtime. For facility managers, implementing a calibration and maintenance log ensures data reliability for HVAC system validation.
Polarimeters are widely used analytical instruments in the food, pharmaceutical, and chemical industries. Their operation is based on the optical rotation of plane-polarized light when it passes through optically active substances. Starch, a fundamental carbohydrate in agricultural and food processing, plays a crucial role in quality control, formulation, and trade evaluation. Compared with chemical titration or enzymatic assays, the polarimetric method offers advantages such as simplicity, high precision, and good repeatability — making it a preferred technique in many grain and food laboratories.
The WZZ-3 Automatic Polarimeter is one of the most commonly used models in domestic laboratories. It provides automatic calculation, digital display, and multiple measurement modes, and is frequently employed in starch, sugar, and pharmaceutical analyses. However, in shared laboratory environments with multiple users, problems such as slow measurement response, unstable readings, and inconsistent zero points often occur. These issues reduce measurement efficiency and reliability.
This paper presents a systematic technical discussion on the WZZ-3 polarimeter’s performance in crude starch content measurement, analyzing its optical principles, operational settings, sample preparation, common errors, and optimization strategies, to improve measurement speed and precision for third-party laboratories.
2. Working Principle and Structure of the WZZ-3 Polarimeter
2.1 Optical Measurement Principle
The fundamental principle of polarimetry states that when plane-polarized light passes through an optically active substance, the plane of polarization rotates by an angle α, known as the angle of optical rotation. The relationship among the angle of rotation, specific rotation, concentration, and path length is expressed by:
α = [α]_T^λ · l · c
Where:
[α]_T^λ is the specific rotation at wavelength λ and temperature T
l is the optical path length (dm)
c is the concentration of the solution (g/mL)
The WZZ-3 employs monochromatic light at 589.44 nm (sodium D-line). The light passes sequentially through a polarizer, sample tube, and analyzer. The instrument’s microprocessor system then detects the angle change using a photoelectric detector and automatically calculates and displays the result digitally.
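Rearranged for routine use, the relation above yields the concentration directly from a measured rotation. A minimal sketch; the specific-rotation value in the example is illustrative, not a starch constant quoted in this paper:

```python
# Rearranging the polarimetry relation alpha = [alpha] * l * c to
# recover concentration from a measured rotation angle. The example
# specific rotation is illustrative only.

def concentration(alpha_deg, specific_rotation, path_dm):
    """Concentration (g/mL) from rotation angle, specific rotation, and path length."""
    return alpha_deg / (specific_rotation * path_dm)

# Example: alpha = +6.65 degrees in a 1 dm tube, assumed [alpha] = +66.5
print(round(concentration(6.65, 66.5, 1.0), 6))  # 0.1
```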
2.2 System Composition
Light Source: Sodium lamp or high-brightness LED providing stable monochromatic light
Polarization System: Generates and analyzes plane-polarized light
Sample Compartment: Holds 100 mm or 200 mm sample tubes; sealed against dust and moisture
Photoelectric Detection: Converts light-signal changes into electrical data
Control & Display Unit: Microcontroller computes α, [α], concentration, or sugar degree
Keypad and LCD: Allows mode selection, numeric input, and measurement display
The internal control logic performs automatic compensation, temperature correction (if enabled), and digital averaging, ensuring stable readings even under fluctuating light conditions.
3. Principle and Workflow of Crude Starch Determination
3.1 Measurement Principle
Crude starch samples, after proper liquefaction and clarification, display a distinct right-handed optical rotation. The optical rotation angle (α) is directly proportional to the starch concentration. By measuring α and applying a standard curve or calculation formula, the starch content can be determined precisely. The clarity and stability of the solution directly affect both response speed and measurement accuracy.
3.2 Sample Preparation Procedure
Gelatinization and Enzymatic Hydrolysis Mix the sample with distilled water and heat to 85–90 °C until completely gelatinized. Add α-amylase for liquefaction and then glucoamylase for saccharification at 55–60 °C until the solution becomes clear.
Clarification and Filtration Add Carrez I and II reagents to remove proteins and impurities. After standing or centrifugation, filter the supernatant through a 0.45 µm membrane.
Temperature Equilibration and Dilution Cool the filtrate to 20 °C, ensuring the same temperature as the instrument environment. Dilute to the calibration mark.
Measurement
Use distilled water as a blank for zeroing.
Fill the tube completely (preferably 100 mm optical path) and remove all air bubbles.
Record the optical rotation α.
If the rotation angle exceeds the measurable range, shorten the path or dilute the sample.
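The standard-curve approach from Section 3.1 can be sketched as an ordinary least-squares fit of rotation angle against concentration, then inverted for unknowns. The calibration points below are made-up illustration values, not real standards:

```python
# Sketch of a rotation-vs-concentration standard curve and its
# inversion. Calibration data here are hypothetical; a real curve
# comes from gravimetrically prepared starch standards.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical standards: concentration (g/100 mL) -> rotation (degrees)
conc = [1.0, 2.0, 3.0, 4.0]
alpha = [2.0, 4.1, 5.9, 8.0]

slope, intercept = linear_fit(conc, alpha)

def starch_content(alpha_measured):
    """Invert the calibration line to estimate concentration."""
    return (alpha_measured - intercept) / slope

print(round(starch_content(4.0), 3))  # 1.995
```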
4. Common Problems and Causes of Slow Response in WZZ-3
During routine use, several factors can cause the WZZ-3 polarimeter to exhibit delayed readings or unstable results.
4.1 Misconfigured Instrument Parameters
When multiple operators use the same instrument, settings are frequently modified unintentionally. Typical parameter issues include:
Measurement Mode (correct: Optical Rotation): changed to "Sugar" or "Concentration", adding unnecessary calculation delay.
Averaging Count N (correct: 1): set to 6 or higher, so multiple averaging cycles delay the output.
Time Constant / Filter (correct: Short / Off): set to "Long", slowing signal processing.
Temperature Control (correct: Off / 20 °C): left "On", so the instrument waits for thermal stability.
Tube Length L (correct: actual tube length, 1 dm or 2 dm): a mismatch weakens the optical signal and extends the measurement.
These misconfigurations are the most frequent cause of slow response.
4.2 Low Transmittance of Sample Solution
If the sample is cloudy or contains suspended solids, the transmitted light intensity decreases. The system compensates by extending the integration time to improve the signal-to-noise ratio, resulting in a sluggish display. When transmittance drops below 10%, the detector may fail to lock onto the signal.
4.3 Temperature Gradient or Condensation
A temperature difference between the sample and the optical system can cause condensation or fogging on the sample tube surface, scattering the light path. The displayed value drifts gradually until equilibrium is reached, appearing as “slow convergence.”
4.4 Aging Light Source or Contaminated Optics
Sodium lamps or optical windows degrade over time, lowering light intensity and forcing the system to prolong measurement cycles. Symptoms include delayed zeroing, dim display, or low-intensity readings even with clear samples.
4.5 Communication and Software Averaging
If connected to a PC with data logging enabled (e.g., 5 s sampling intervals or moving average), both display and response speed are limited by software settings. This is often mistaken for hardware delay.
5. Standardized Parameter Settings and Optimization Strategy
5.1 Recommended Standard Configuration
Measurement Mode: Optical Rotation (direct α measurement)
Tube Length: match the actual tube, 1 dm or 2 dm (prevents calculation mismatch)
Averaging Count (N): 1 (fastest response)
Filter / Smoothing: Off (real-time display)
Time Constant: Short or Auto (minimizes integration time)
Temperature Control: Off (for room-temperature samples)
Wavelength: 589.44 nm (sodium D-line)
Output Mode: Continuous / Real-time (avoids print delay)
Gain: Auto (optimal signal balance)
These baseline parameters restore the instrument’s “instant response” behavior.
5.2 Operational Workflow
Blank Calibration
Fill the tube with distilled water.
Press “Zero.” The display should return to 0.000° within seconds.
If slow, inspect optical or parameter issues.
Sample Measurement
Load the prepared starch solution.
The optical rotation should stabilize within 3–5 seconds.
Larger delays indicate improper sample or configuration.
Data Recording
Take three consecutive readings.
Acceptable repeatability: standard deviation < 0.01°.
Calculate starch concentration via calibration curve.
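The repeatability criterion above (standard deviation of three consecutive readings below 0.01°) is easy to automate; a minimal sketch:

```python
from statistics import stdev

# Repeatability check for consecutive rotation readings: accept the
# set if the sample standard deviation is below 0.01 degrees.

def repeatable(readings, limit=0.01):
    """True if the readings meet the repeatability criterion."""
    return stdev(readings) < limit

print(repeatable([12.341, 12.342, 12.340]))  # True
print(repeatable([12.30, 12.36, 12.33]))     # False
```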
Post-Measurement Maintenance
Rinse the tube with distilled water.
Perform “factory reset” weekly.
Inspect lamp intensity and optical cleanliness quarterly.
6. Laboratory Management Under Multi-User Conditions
When multiple technicians share the same WZZ-3 polarimeter, management and configuration control are crucial to maintaining consistency.
6.1 Establish a “Standard Mode Lock”
Some models support saving user profiles. Save the optimal configuration as “Standard Mode” for automatic startup recall. If unavailable, post a laminated parameter checklist near the instrument.
6.2 Access Control and Permissions
Lock or password-protect “System Settings.” Only administrators may adjust system parameters, while general users perform only zeroing and measurement.
6.3 Routine Calibration and Verification
Use a standard sucrose solution (26 g/100 mL, α = +13.333° per 100 mm) weekly to verify precision.
If the response exceeds 10 s or deviates beyond tolerance, inspect light intensity and alignment.
6.4 Operation Log and Traceability
Maintain a Polarimeter Usage Log recording:
Operator name
Mode and settings
Sample ID
Response time and remarks
This allows quick identification of anomalies and operator training needs.
Such standardization minimizes human error and prolongs equipment life.
7. Case Study: Diagnosing Slow Measurement Response
A food processing laboratory reported a sudden increase in measurement time — from 3 s to 15–30 s per sample.
Investigation Findings:
Mode = Optical Rotation (correct).
Averaging Count (N) = 6; “Smoothing” = ON.
Sample solution slightly turbid and contained micro-bubbles.
Temperature control enabled but sample not equilibrated.
Corrective Measures:
Reset N to 1 and disable smoothing.
Filter and degas the sample solution.
Turn off temperature control or match temperature to ambient.
Result: Response time returned to 4 s, with excellent repeatability.
Conclusion: Measurement delay often stems from combined human and sample factors. Once parameters and preparation are standardized, the WZZ-3 performs rapidly and reliably.
8. Maintenance and Long-Term Stability
Long-term accuracy requires regular optical and mechanical maintenance.
Optical Window Cleaning (monthly): wipe with a lint-free cloth and anhydrous ethanol
Light Source Inspection (every 1,000 h): replace an aging sodium lamp
Environmental Conditions (always): keep in a stable 20 ± 2 °C lab with minimal vibration
Power Supply (always): use an independent voltage stabilizer
Calibration (semi-annually): verify with a standard sucrose solution
By adhering to this preventive maintenance schedule, the WZZ-3 maintains long-term reliability and reproducibility.
9. Discussion and Recommendations
The WZZ-3 polarimeter’s digital architecture provides high precision but is sensitive to user settings and sample clarity. Slow responses, unstable zeroing, or delayed results are rarely caused by hardware faults — they are almost always traceable to:
Averaging or smoothing functions enabled;
Temperature stabilization waiting loop;
Cloudy or bubble-containing samples;
Aging optical components.
To prevent recurrence:
Always restore “fast response” configuration before measurement.
Use filtered, degassed, and temperature-equilibrated samples.
Regularly calibrate with sucrose standards.
Document all measurements and configuration changes.
Proper user discipline, combined with parameter locking and preventive maintenance, ensures the WZZ-3’s continued performance.
10. Conclusion
The WZZ-3 Automatic Polarimeter is a reliable and efficient instrument for crude starch content analysis when properly configured and maintained. In multi-user laboratories, incorrect parameter settings — especially averaging, smoothing, and temperature control — are the primary causes of slow or unstable readings.
By implementing the following practices:
Standardize instrument settings,
Match optical path length to actual sample tubes,
Maintain sample clarity and temperature equilibrium,
Enforce configuration management and operator training,
laboratories can restore fast, accurate, and reproducible measurement performance.
Furthermore, establishing a calibration and documentation system ensures long-term stability and compliance with analytical quality standards.