Historically, if you wanted to automate welding of a part, you needed two key ingredients: no gaps at the joint and part-to-part consistency to ensure repeatability. In some situations, however, one or both of those ingredients may be impossible to achieve.
Paired with the proper software, adaptive welding sensors can drastically improve part quality and consistency while reducing downtime caused by adjustments to fixturing and/or robot programming. Sensors used specifically for robotic welding applications typically fall into four categories: touch, through-arc, laser and vision. These sensors serve three primary functions: seam finding, seam tracking, and part scanning, the last of which can often also be used for inspection. Each function offers unique benefits depending on the part and expected outcome, and most technologies can be mixed and matched where their use is not redundant.
“What is the difference between seam finding and seam tracking?” and “How do I know when to use seam finding vs. seam tracking?” These are questions our robotic welding experts are frequently asked. With that in mind, here are several things to consider when deciding how to proceed with your robotic welding process:
For a robot to precisely locate a weld joint before welding begins, high-speed seam finding, or joint finding, is recommended. Workpieces will inevitably have some range of variation, but your goal is to minimize that variation with spec’d parts and fixturing so the joint seam falls within half the width of the weld wire. This process can be done in several ways via various technologies, enabling the robot to find the weld joint.
Once the seam is discovered, usually by finding two or more known points on the part, the robot shifts the program path to complete the weld. The type of seam finding required is dictated by two primary factors: the expected cycle time and the type of joint.
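To make the path-shift step concrete, the sketch below shows one way two or more found points can be turned into a corrected weld path. This is a minimal illustration, not any vendor's implementation; real controllers perform the shift internally through pendant commands, and all names here are hypothetical.

```python
# Minimal sketch: shift a taught weld path from search results.
# "nominal" points come from the taught program; "found" points come from
# the touch/laser search. A rigid 2D fit (rotation + translation) is
# computed with the Kabsch method and applied to every taught path point.
import numpy as np

def rigid_shift_2d(nominal, found):
    """Least-squares rigid transform (R, t) mapping nominal points onto found points."""
    nominal, found = np.asarray(nominal, float), np.asarray(found, float)
    cn, cf = nominal.mean(axis=0), found.mean(axis=0)
    H = (nominal - cn).T @ (found - cf)      # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                           # optimal rotation
    if np.linalg.det(R) < 0:                 # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cf - R @ cn
    return R, t

# Example: the part was found 1.2 mm right and 0.5 mm up from its taught position.
R, t = rigid_shift_2d([(0.0, 0.0), (100.0, 0.0)], [(1.2, 0.5), (101.2, 0.5)])
taught_path = np.array([(10.0, 5.0), (50.0, 5.0), (90.0, 5.0)])
shifted_path = taught_path @ R.T + t         # corrected weld path points
```

With two points this recovers in-plane translation and rotation; a third, non-collinear point would be needed to resolve a full 3D shift.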
Seam finding is one of the most popular welding functions and is often achieved through the following options, beginning with tactile methods:
Touch Sensing – Ideal for finding the orientation of parts with simple joints and geometries, this method, also known as “wire touch”, uses the physical touch of the weld wire extending from the torch to detect the conductive surface of the part about to be welded. The robot searches at slow speed with a low voltage fed through the wire, and the eventual touch completes the circuit. This can also be done with the nozzle of the torch in some scenarios. Completed through built-in features on a welding power supply designed for automation, systems like Yaskawa’s Touch Sense package use a low-voltage circuit during a low-speed search to determine the position of the weld joint; a simplified search routine is sketched after the lists below.
Pros:
Low complexity; Built-in pendant commands
Works on all conductive material
Easy to teach with macro jobs
Does not interfere with joint access
No external hardware is required on robot
Performs multiple searches with one wire cut
Locates most lap and fillet joint types; can also be used with V butt joints
Offers a lower cost option
Cons:
Requires a wire cutter (wire brake optional)
Limited to lap joint thickness (>3 mm)
Slower vs. laser or camera
Limited ability to detect joint gap
Cannot find square butt joints
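The low-speed search itself can be pictured as below. The robot interface (move_to, jog, circuit_closed, and so on) is entirely hypothetical, standing in for the built-in pendant macro jobs; the sketch only illustrates the creep-until-contact logic.

```python
# Minimal sketch of a wire-touch search (hypothetical robot API, not a
# pendant macro): creep along a search direction with a low sensing voltage
# on the wire; when contact closes the circuit, latch the TCP position.
def touch_search(robot, start, direction, max_travel_mm=20.0, step_mm=0.1):
    robot.move_to(start)                 # approach point from the taught job
    robot.set_search_voltage(True)       # enable the low-voltage sensing circuit
    traveled = 0.0
    try:
        while traveled < max_travel_mm:
            robot.jog(direction, step_mm)    # slow incremental search motion
            traveled += step_mm
            if robot.circuit_closed():       # wire touched the conductive part
                return robot.current_tcp()   # latched touch position
        raise RuntimeError("Touch search failed: no contact within travel limit")
    finally:
        robot.set_search_voltage(False)
```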
Wire Sensing – Similar to touch sensing, where the wire from the torch makes tactile contact with the part, this option uses a servo motor in the torch to rapidly move the wire up and down while the robot moves across the part. This enables easy location of lap joints, and it can measure items like material height and gaps. Offered through Fronius, the Wire Sense software option provides great efficiency.
Pros:
Can detect joints like butt joints that cannot be easily found using traditional static wire or nozzle touch sense
Can be used for lap joints less than 3 mm
Ability to measure part height offsets and gap width and depths
Cons:
Requires specific hardware and software license from Fronius
Slower vs. laser or camera
Not available on all brands of welding power supplies
Laser Point Sensing – Two to five times faster than touch sensing, this method uses a basic laser dot sensor mounted to the weld torch to capture the location and orientation of a part nearly as quickly as the laser fires, providing fast and accurate seam finding. Capable of working with any welding power supply, Yaskawa’s AccuFast™ non-contact laser sensing solution provides a cost-effective option between tactile and vision sensing solutions.
Pros:
Low- to medium-complexity; Some training required with built-in commands
Works for most materials
Easy to teach with macro jobs
Uses a non-contact sensor
Faster search speeds than touch sensing
Eliminates the need for a wire cutter
Finds most joint types, detecting lap joints down to 1/16” thick
Cons:
Sensor box is mounted adjacent to the torch
Mounting bracket per torch type
Limited to lap joint thickness (>1.5 mm)
Limited ability to detect joint gap
Cannot find square butt joints
Highly reflective material requires evaluation
Laser Seam Finding – Capable of picking up more characteristics in a single scan than a laser dot sensor, a profile laser interface, such as Yaskawa’s MotoEye™ SF, provides extremely fast joint measurement. This solution works well with a sensing device that uses 3D multi-laser range imaging optics to provide the needed measurements and joint gap data to the robot before welding begins. Options from SERVO-ROBOT (i-CUBE™), ABICOR BINZEL/Scansonic and Wenglor work with Yaskawa’s MotoEye SF pendant interface.
Pros:
Works on different materials in all lighting
Easy to teach with macro jobs
Provides joint gap data
Long focal length; mount away from arc
Locates 2.5D; offset and depth
Compact and self-contained
I/O interface can be retrofit to older controls
Cons:
Medium- to high-complexity; Training on vision system suggested
May restrict access into part/tooling
40 mm FOV may require multiple searches for large offsets
Seam tracking, which often simplifies programming, uses innovative technology to equip the robot to track the weld position in real time during the welding process. It is popular for applications where distortion can occur while welding a part, or for heavy cast parts, and it is commonly performed using the following methods:
Through-the-arc Seam Tracking – Best for parts with long or curved seams that vary from part to part, a through-the-arc seam tracker, like Yaskawa’s ComArc LV (low voltage), utilizes a solid-state sensor mounted near the welding power supply to actively measure arc characteristics during the weld sequence. This determines variations between the robot’s taught path and the actual seam path; a simplified correction rule is sketched after the lists below.
Pros:
Low complexity
Reliable sensor and easy to support
Passover function restricts sensor error
Phase Compensation calibrates weld circuit
Can track lap joints 1/8 in. or 3 mm thick
Supports dual robots and coordinated motion
Offers a lower cost option
Cons:
Requires weaving and thicker material
Limited by arc/weld physics
Requires a pre-weld search to find the weld joint
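The correction logic of through-the-arc tracking can be sketched as follows. This is an illustration of the general principle, not Yaskawa's ComArc implementation, and the gains are placeholders: in constant-voltage GMAW the arc current rises as the contact-tip-to-work distance shrinks, so a left/right imbalance at the weave extremes indicates a lateral offset and the mean level indicates a height offset.

```python
# Minimal sketch of a through-the-arc correction rule (illustrative only).
def arc_track_correction(i_left, i_right, i_nominal,
                         k_lateral=0.01, k_height=0.005):
    """Return (lateral_mm, height_mm) path corrections from arc current
    sampled at the left/right weave extremes; gains are process-tuned."""
    lateral = k_lateral * (i_right - i_left)   # steer toward the lower-current side
    height = k_height * ((i_left + i_right) / 2.0 - i_nominal)
    return lateral, height

# Example: higher current at the right extreme suggests the torch is closer
# to the right sidewall (sign conventions depend on the setup).
dy, dz = arc_track_correction(i_left=250.0, i_right=270.0, i_nominal=255.0)
```

This also makes the listed cons concrete: the method needs weaving to generate left/right samples and enough material for the current to vary measurably with stick-out.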
Laser Seam Tracking – Suggested for thin material with varying seams that demand the fastest cycle time possible, this method combines a high-performance laser with a high-speed controller to find the seam and part location in real time while the part is being welded. A dedicated program compensates the path and adapts the welding parameters to seam location and variation. Yaskawa’s MotoEye LT or SERVO-ROBOT’s DIGI-I/Power-cam products work well for this.
Pros:
Reliably tracks thin gauge lap joint
Supports high travel speeds (>100 IPM)
Weaving Motion with tracking possible
Tracking is not affected by weld settings
Supports coordinated motion
Ethernet interface available
Camera hardened against welding arc
Adaptive welding function; speed and weld settings
Cons:
High-complexity and often high cost; Training on vision system required
Torch-mounted sensor restricts joint access
Tracking radius is limited to 40-60 mm
Limited to two robots on one system
Dual Laser Seam Tracking – To optimize cycle time, two robots are sometimes equipped with seam tracking technology to work in unison. This uses the same interface and technology mentioned above, but it can cut cycle time in half and reach more weld joints on larger or complex parts than a single robot.
Robotized laser beam welding (LBW) is critical for high-quality metal joining in manufacturing but faces challenges due to variations in joint preparation, fixturing, robot trajectory accuracy, and heat-induced distortions. These factors often cause joint and beam misalignment, or joint and beam offset, particularly problematic when welding closed square butt joints with a tightly focused laser beam (LB). Even small offsets can lead to side-wall fusion defects, which are difficult to detect using non-destructive techniques like ultrasonic testing due to the defect’s flatness and unfavorable orientation [1].
Industrial robots used in LBW exhibit high repeatability but low accuracy, especially during complex trajectories requiring LBW head reorientation [2]. This results in inaccuracies between CAD/CAM designs and physical joints, necessitating costly and time-consuming interventions.
Two key strategies address robotic trajectory inaccuracies: joint finding and seam tracking. Joint finding, a preprocessing step, improves accuracy in intricate joints, especially when LBW head reorientation is required [3, 4]. Performance can be enhanced by adding redundant degrees of freedom in robotic manipulators for curved joints [5]. Seam tracking, or joint tracking, is an in-process method that estimates joint positions during welding and adjusts the LB via external actuators. Seam tracking is advantageous as it compensates for joint deviations caused by heat-induced distortions and improves productivity.
Commercial seam tracking systems often use structured light and laser triangulation to acquire perpendicular distance profiles of joints. Examples include systems from Permanova Lasersystem AB (Sweden) [6], Meta Vision Systems (UK and Canada) [7], Servo-Robot (Canada) [8], and German companies like Precitec [9], Scansonic [10], Falldorf Sensor [11], and Scout Vision [12]. While effective for joints with clear geometric disparities, these systems struggle with closed square butt joints, defined as having a zero gap (≤ 0.1 mm) and negligible misalignment (≤ 0.1 mm). Triangulation methods also face limitations due to sensor forerun—the distance between the measurement point and the LB spot—which impacts manipulator reachability and the minimum tracking radius for curved joints [13].
Developing joint tracking systems for industrial LBW involves overcoming challenges in cost-efficiency, flexibility, robustness, and adherence to stringent performance standards. Maintenance and software upgrades must also remain cost-effective. Sensor systems integrated into LBW heads must be non-contact, non-intrusive, and capable of withstanding harsh conditions such as heat radiation, laser scattering, metal vaporization, spectral emissions, smoke, and spatter. These systems should not compromise the mechanical flexibility and accessibility of the LBW head. Signal processing and control algorithms must exhibit robustness, determinism, and reliability, even in challenging conditions like tack welds or surface scratches near the joint.
Before presenting the proposed joint tracking system solution, a comparative analysis of existing approaches is conducted to underscore the advantages of the suggested method. Recent research efforts addressing the challenges of closed square butt joint tracking in LBW are reviewed, with a focus on recent publications. This review categorizes the studies based on sensing principles, reflecting the understanding that the performance of joint tracking systems is predominantly determined by the capabilities of the sensing and actuation hardware, rather than by signal processing techniques. It is important to note that systems developed for arc welding are excluded from this discussion due to their fundamentally different requirements compared to LBW.
Gao et al. introduced in [14] and [15] a novel application of a magneto-optical sensor paired with a magnetic field generator for joint and beam offset estimation. While the exact sensor positioning relative to the beam spot remains undisclosed, the substantial sensor forerun is apparent. The electromagnet was placed on the workpiece’s root side. The proposed approach employed a Kalman filter [16] with a constant velocity model for the estimation while mitigating Gaussian white noise inherent in the system and measurements. The research demonstrated commendable tracking performance; however, the inclusion of an electromagnet in the setup may present challenges in most industrial contexts. Notably, the substantial sensor forerun and shallow depth of field are limitations, and the system’s applicability is restricted to magnetic materials. Additionally, the findings presented are limited to straight linear joints.
In another paper, Gao et al. [17] suggest an off-axis high-speed near-infrared camera equipped with optical filters to estimate the joint and beam offset. The study’s findings indicated a correlation between the thermal distribution of the molten pool and the degree of offset, leading the authors to propose a dynamic model for estimating this offset. However, it is important to note that the off-axis camera configuration and its size reduce the flexibility of the LBW head in comparison to solutions incorporating integrated sensors. Furthermore, the presented results are exclusively applicable to straight linear joints.
Huang et al. introduced in [18] and [19] a joint and beam offset estimation utilizing three laser stripes. Two of these stripes facilitated the acquisition of a 3D profile through optical triangulation, while the third stripe served as active illumination for offset estimation. The study demonstrated promising outcomes, but it remains constrained by the inherent limitations of the triangulation approach previously discussed.
Shao et al. [20] introduced an innovative triangulation system for joint and beam offset estimation. This system employed a vision camera and an acousto-optic modulated laser producing a constant-frequency intensity-modulated laser line. By applying the Fourier transform to a synchronized image sequence, this setup effectively mitigated processing noise, eliminating noise of random frequency in the frequency domain. While promising results were demonstrated, the method remains subject to the limitations inherent in the triangulation approach. Furthermore, the system’s evaluation did not encompass curved joints.
Wang et al. [21] employed a narrow depth-of-field lens to achieve precise estimation of joint and beam offset and to contribute to the advancement of joint tracking. The use of this lens in the vision system allowed for selective focus on the target area. Subsequently, a compounded image algorithm was applied to estimate the joint’s centerline, gap width, and normal vector. While the authors reported high precision in their results from selected joint images, details on the accuracy assessment method are absent. Moreover, it is still unclear whether these selected images were captured under realistic welding conditions, including manipulator motion and active welding. Additional practical concerns persist, including the limitations imposed by the extremely shallow depth of field, the absence of information about sensor forerun, and the proximity of the lens system to the challenging environment of the interaction zone with factors such as heat radiation, spatter, and smoke. Additionally, the presented results exclusively pertain to straight linear joints.
In [22], a noteworthy texture-based algorithm was proposed by Krämer et al., utilizing surface texture disparities between the two workpiece members in a joint. This approach aims to estimate the joint and beam offset from images acquired by a vision system, coupled with an off-axis illumination laser. The method exhibits promise in estimating the offset based on evaluations using relevant images. However, concerns persist about the system’s performance under realistic welding conditions, including manipulator motion and ongoing welding. Additionally, it is important to note that the study’s presented results exclusively relate to straight linear joints.
To address sensor forerun issues, Shao et al. [23] proposed a passive vision system. This system incorporated an off-axis camera mounted to view both inside and ahead of the process interaction zone, capturing the joint line. An algorithm combining the strengths of a particle filter [24] and the Hough transform [25] was employed to track the weld joint’s slope and intercept. While promising results were demonstrated, the method remains subject to the limitations inherent in the off-axis configuration. Furthermore, the system’s evaluation did not encompass curved joints.
An alternative approach, proposed by Will et al. [26], offers a solution to the sensor forerun issue through the integration of an optical coherence tomography (OCT) system for joint tracking applications. The study demonstrates the effectiveness of OCT sensing for closed square butt joints; however, it is important to note that the presented findings were limited to evaluations of straight linear joints. A significant limitation of this method was its prohibitive cost, which is still a challenge in the integration of OCT for industrial LBW applications to date.
A prior study [1] introduced an active vision system incorporating a high dynamic range camera aligned axially with the processing LB. This system employed an algorithm based on the Hough transformation [25] to extract the joint in separate images, followed by joint and beam offset estimation via a Kalman filter [16]. The filter accommodated scenarios where detection failed, such as situations involving obscured joint segments due to tack welds. To address potential misinterpretations caused by surface scratches resembling joints, the system used a joint curve prediction model defined within the robot program.
Elefante et al. [27] previously introduced a system employing a photodiode, aligned coaxially with the processing LB, to identify deviations between the LB and the joint. This method utilized wavelet analysis [28] to distinguish the evolution of a joint and beam offset from sensor noise. Although the system did not provide precise values or direction of the offset, it has the potential to complement vision-based tracking systems. Additionally, it can function as a binary go/no-go system, halting the process upon detecting a substantial offset.
In summary, none of the referenced research works has demonstrated active joint tracking during the evaluation of their proposed sensor systems, raising questions about the industrial relevance of some due to the highlighted specific limitations. Despite efforts to address joint and beam offset estimation challenges in butt joint welding, there is a notable absence, to the author’s knowledge, of studies exploring closed-loop tracking in LBW for curved closed square butt joints, where commercial laser triangulation systems are prone to fail. It is important to note that the references discussed here are not a complete list of all existing research in this area but rather a survey of unique attempts, illustrating viable sensing solutions beyond laser triangulation methods.
This paper introduces an integrated vision-based system tailored for robotic LBW, with a specific focus on tracking curved and closed square butt joints using a relevant sensing approach, evaluated in closed loop during welding. The evaluation includes real-time image processing integrated with closed-loop control. The study specifically addresses the welding of a challenging workpiece of two stainless-steel plates with a curved closed square butt joint. Notably, the system incorporates signal processing techniques to manage scenarios where joint detection is not possible, such as those involving obscured joint segments due to tack welds, and in the presence of spatter, smoke, and surface contaminations like oil, dust, scratches, and oxides. To the author’s knowledge, previous research has not addressed closed-loop control under these challenging conditions for closed square butt joint laser welding (Fig. 1).
The LBW system, sensor setup, and welding test cases are described in the following. Considering the typical effective LB diameter in LBW (up to 1.12 mm), a control limit of 0.56 mm was set on the joint and beam offset to prevent lack of side-wall fusion, especially in situations involving thin joint members. The experimental setup integrates a machine vision system seamlessly into the LBW head, coaxially aligned with the processing LB. The imaging included active off-axis LED illumination and an optical filter for disturbance attenuation. The actuation was facilitated by mounting the tool center point (TCP) on a fast linear tracking axis, allowing adjustment of the TCP (LB spot center) relative to the robot hand.
The LBW head was manipulated by an industrial robot, ABB IRB, with an external tracking axis, and the TCP corresponded to the focused LB spot on the workpiece. The linear path repeatability and accuracy of this robot were 0.09 mm and 0.36 mm, respectively, and no specifications were given for curved paths; path accuracy is especially insufficient in curves and on TCP reorientation for any industrial robot. The laser source was a -nm wavelength fiber laser (IPG YLR--S) and the LBW head was from Permanova Lasersystem AB. Fiber-delivered lasers are state of the art in industrial welding. The optical delivery fiber was 600 µm in diameter, and by using a 160-mm focal length collimating lens and a 300-mm focal length focus lens, an LB spot diameter of 1.12 mm and a Rayleigh length of 13.7 mm were obtained.
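As a consistency check, the stated spot diameter follows from imaging the fiber end face at the magnification given by the ratio of the two focal lengths, a standard relation for fiber-delivered laser optics:

\[ d_{\text{spot}} = d_{\text{fiber}}\cdot \frac{f_{\text{focus}}}{f_{\text{collimator}}} = 600\ \mu\text{m}\cdot \frac{300\ \text{mm}}{160\ \text{mm}} = 1125\ \mu\text{m} \approx 1.12\ \text{mm} \]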
Argon gas was supplied to the top side of the workpiece through a small metal tube, with its outlet positioned and angled to provide a laminar flow from the front end of the LB interaction zone. Argon was also injected into a root gas channel in the fixture for process shielding, and argon was flushed in front of the LBW head focus lens as a “gas knife” for protection against soot and spatter. An extractor tube evacuated the smoke from the interaction zone to stabilize the process and to improve the view for the machine vision system. The continuous-wave total laser power used during the experiments was W and the welding travel speed was 12 mm/s. Fig. 2 shows the fixture and how the argon gas was supplied. The setup also includes a filler wire nozzle; however, no filler wire was used during the experiments.
A CMOS camera (PhotonFocus DR1-D(IE)−200) with a high dynamic range (120 dB) was integrated coaxially with the LB in the LBW head, as shown in Fig. 3. The camera and two external, off-axis LED illumination units were synchronously controlled by a trigger module at a frame rate of 100 frames per second. Each image had a pixel area of 640 × 300, and the corresponding spatial pixel resolution on the workpiece surface was 25 × 25 µm, giving a field of view (FOV) of 16 × 9 mm under an assumed pinhole camera model. The expected proximity of the nominal to the actual joint position implies that when the sensor viewed the nominal joint position, the actual joint position fell within the sensor’s FOV, typically within 5 mm.
Earlier studies [29] have shown that the spectral range between 400 and 500 nm suffers minimal interference from the LBW spectral emissions. At this spectral range, the sensitivity of the imaging sensor and the transmittance through the processing optics were also sufficient. Therefore, two power LEDs with a center wavelength of 450 nm illuminated the workpiece, and a matching optical bandpass filter was placed in front of the camera. The power LEDs were on only during exposure of the imaging system, which made it possible to overdrive them during the given duty cycle and obtain a higher light intensity than with continuous illumination. With this setup, the surface texture, joint gap, and tack welds in the area in front of the melt pool were enhanced.
The LBW head underwent reorientation to align with the tangent of the joint path in the specified travel direction. This adjustment focused solely on changing the perpendicular joint and beam offset, thereby enabling joint tracking. The LB system was sufficiently insensitive to z-offset deviations due to its extended Rayleigh length. The welding head was mounted on a rapid linear tracking axis, enabling manipulation of the TCP with respect to both the robot hand and the welding path, as illustrated in Fig. 3. This configuration allowed precise control of the TCP position during welding. The angular orientation of the LBW head, actuated by a servo motor, was controlled by an input–output module linked to a measurement PC. The voltage input signal to the servo corresponded to a specific motion distance of the tracking axis. Notably, the advantage of this setup was the synchronized movement of the sensor with the TCP: the continuous realignment of the sensor coincided with the TCP aligning with the joint position, effectively compensating for any discrepancies between the welding path and the robot trajectory. The nominal robot trajectory was deliberately programmed with TCP deviations from the actual joint path.
The workpieces were 2-mm-thick sheet plates of stainless steel, SS316. The designated welding joint was a curved path with different radii in two directions, as depicted in Fig. 4. The minimal radius of curvature was 15 mm, which necessitated reorientation of the TCP. To impose challenges on the control system, the workpieces were strategically designed with tack welds obscuring joint detectability. The tack welds were systematically distributed along the joint with a fixed spacing, using the nominal robot path but pulsing the laser only at the tack weld positions. The industrial relevance of these tack welds is, of course, to minimize heat-induced distortions. Additionally, the plates underwent abrasion to remove oxides, resulting in minor scratches in proximity to the joint, in accordance with common practice in high-end industrial applications. Deeper scratches, introduced using a knife, extended into the curved segment of the joint path, as illustrated in Fig. 4. The welding procedure yielded high-quality keyhole mode welding and a narrow seam waist. Each workpiece underwent three distinct test cases. The first test case involved no welding and employed a passive control system, capturing the nominal path and deviations in the robot path relative to the joint. The second test case excluded welding but activated joint tracking to assess its performance without welding interference. The third test case was conducted during welding with active joint tracking, offering an opportunity to evaluate the system’s performance in a realistic manufacturing scenario.
The following describes the implementation of the image processing and control algorithm. Real-time algorithms were developed in C++. Images were captured and processed as outlined below, and the estimated joint and beam offset was fed into the closed-loop control algorithm. To maintain a constant and short loop time, only raw images from the camera were saved during welding, as saving images at each processing step would have required excessive computer processing time.
A novel algorithm for robust joint tracing during welding of curved closed square butt joints was presented in [1] and forms the basis for the image processing algorithm presented here. The algorithm is outlined in the following; a more detailed description is given in [1].
The presented solution employs a multi-faceted approach, incorporating the Hough transformation [25] and a Kalman-based filter [16]. Figure 5 provides a schematic representation of the algorithm, detailing its input (raw images captured by the integrated camera) and output (joint and beam offset estimate) at each step, with subsequent sections elaborating on individual components.
The initial phase of the algorithm leverages image processing techniques to determine the joint’s position in a sequence of images, denoted as \(I\), at discrete time steps \({t}_{\text{k}}\) indexed by \(k\). This operation was executed in the region in front of the keyhole, where the joint was within the FOV. A region of interest (ROI) was selected, effectively narrowing down the pixel count for subsequent processing stages. This reduction in computational load significantly enhances real-time performance. The resulting image \({I}_{\text{ROI}}\) was a square encompassing 200 × 200 pixels.
Subsequently, edge detection was employed to detect the set of pixels corresponding to the joint position within \({I}_{\text{ROI}}\). The Canny edge detection method [30] was chosen for its robustness, utilizing two threshold levels (high \({C}_{\text{h}}\) and low \({C}_{\text{l}}\)) to detect intensity gradients in \({I}_{\text{ROI}}\). The output was a binary image \({I}_{\text{edge}}\), which ideally holds only the edge pixels corresponding to the joint curve.
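These first two stages can be sketched compactly. The paper's implementation is in C++ and does not name a library; OpenCV's Python bindings are used here purely as a stand-in, and the ROI placement and Canny thresholds are illustrative.

```python
# Minimal sketch of ROI selection and Canny edge detection (OpenCV as a
# stand-in for the paper's C++ implementation; thresholds illustrative).
import cv2

def extract_joint_edges(frame, roi_x, roi_y, roi_size=200,
                        c_low=50.0, c_high=150.0):
    """Crop the 200x200 ROI ahead of the keyhole, then compute the binary
    edge image I_edge with the two-threshold Canny detector."""
    i_roi = frame[roi_y:roi_y + roi_size, roi_x:roi_x + roi_size]
    i_edge = cv2.Canny(i_roi, c_low, c_high)   # gradient-based edge pixels
    return i_roi, i_edge
```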
In the third step, a parametric model was fitted to the edge pixels using a specialized variant of the Hough transform. The conventional Hough transform [25] could be utilized to transform edge pixels from the image space of \({I}_{\text{edge}}\) into a parameter space based on a parametric model representing a straight line in the Hesse normal form \(r=x\text{cos}\theta +y\text{sin}\theta\), where \(r\) is the distance from the origin of \({I}_{\text{edge}}\) to the nearest point on the straight line, and \(\theta\) is the angle between the x-axis (horizontal axis) and the line connecting the origin to the said point. In our modified algorithm, a second-order polynomial \(y=a{x}^{2}+bx+c\) was employed to fit the joint curve.
Given the set of edge pixel points in \({I}_{\text{edge}}\) as \({n}_{\text{p}}\) points \(\left\{\left[{x}_{1}, {y}_{1}\right], \dots ,[{x}_{{\text{n}}_{\text{p}}}, {y}_{{\text{n}}_{\text{p}}}]\right\},\) the objective was to determine the best fit of these points to the given polynomial. The coordinates (\({x}_{\text{i}}, {y}_{\text{i}}\)) were transformed into the parameter space defined by the coefficients \(a, b,\) and \(c\). Subsequently, for each pixel (\({x}_{\text{i}}, {y}_{\text{i}}\)), all combinations in the sets \(a = \left\{{a}_{1}, \dots , {a}_{{\text{n}}_{\text{a}}}\right\}\) and \(b = \left\{{b}_{1}, \dots , {b}_{{\text{n}}_{\text{b}}}\right\}\) were employed to calculate a range of potential curve segments and their corresponding \(c\) parameter, given as \({c}_{\text{j}}={y}_{\text{i}}-{a}_{\text{j}}{x}_{\text{i}}^{2}-{b}_{\text{j}}{x}_{\text{i}}\). The resulting parameter values \(\left({a}_{\text{j}}, {b}_{\text{j}}, {c}_{\text{j}}\right)\) were stored in an accumulator matrix alongside a counter counting the occurrence of each \(c\) value for a given combination of \(a\) and \(b\). This process enabled the identification of edge pixels that aligned most closely with the joint curve, given by the parameters \({\Omega }^{*}=\left\{{a}^{*},{b}^{*}, {c}^{*}\right\}\). The estimated joint position was then computed by applying this polynomial to the original image \(I\), extrapolating it to the \(x\)-position of the LB spot in the image. The output \({y}_{\text{m},\text{k}}\) is the measured joint position in image \(k\).
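The voting scheme can be sketched as below. The parameter grids, intercept range, and bin width are illustrative placeholders, not the paper's quantization; the structure follows the description above: every edge pixel votes, for each (a, b) candidate, for the implied intercept c.

```python
# Minimal sketch of the second-order-polynomial Hough voting described above.
import numpy as np

def hough_poly2(edge_pts, a_grid, b_grid, c_min=-150.0, c_max=150.0, c_bins=300):
    acc = np.zeros((len(a_grid), len(b_grid), c_bins), dtype=np.int32)
    c_step = (c_max - c_min) / c_bins
    for x, y in edge_pts:                          # edge pixels from I_edge
        for i, a in enumerate(a_grid):
            for j, b in enumerate(b_grid):
                c = y - a * x * x - b * x          # implied intercept c_j
                k = int((c - c_min) / c_step)
                if 0 <= k < c_bins:
                    acc[i, j, k] += 1              # vote for (a_j, b_j, c_j)
    i, j, k = np.unravel_index(np.argmax(acc), acc.shape)
    n_accmax = int(acc[i, j, k])                   # fit-quality index
    return a_grid[i], b_grid[j], c_min + (k + 0.5) * c_step, n_accmax

# The measurement y_mk is the winning polynomial evaluated at the
# x-position of the LB spot: y_mk = a*x_lb**2 + b*x_lb + c
```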
The largest counter value \({n}_{\text{accmax}}\) in the accumulator matrix is an index of the fit quality, reflecting how well edge pixels in \({I}_{\text{edge}}\) conform to the joint curve. This value was subject to a thresholding process \({T}_{\text{pmin}}\) to decide the presence of the joint in \({I}_{\text{edge}}\). Additionally, two more threshold parameters, \({T}_{\text{cmax}}\) and \({T}_{\text{tmax}}\), were introduced to assess the maximum positional change between consecutive frames when the joint was detected or not in the previous frame. These thresholds were used to evaluate the uncertainty of measurement. Consequently, a noise variance \({R}_{\text{k}}\) was calculated based on this thresholding and further used in the subsequent Kalman-based filter.
To handle scenarios where the joint position could not be directly determined, and to improve estimation accuracy, a constant position model was used in a stochastic state observer. The assumption behind this constant position model was that there should not be any notable change in joint position between two consecutive image frames. This was implemented in the form of a Kalman filter. The filter combines the information from the model of the TCP motion and measured data to calculate an estimate \({\widehat{\xi }}_{\text{k}}\) of the measured position \({y}_{\text{m},\text{k}}\). The output from the compounded algorithm was a joint and beam offset estimate \({d}_{\text{k}}= {\widehat{\xi }}_{\text{k}}-{\text{TCP}}_{\text{k}}\) at time \({t}_{\text{k}}\), the difference between the center of the LB spot (\({\text{TCP}}_{\text{k}}\)) and the corresponding joint position \({\widehat{\xi }}_{\text{k}}\).
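A scalar version of this observer can be sketched as below. The process-noise value is an assumed tuning parameter, and the measurement variance R_k is the one produced by the thresholding step above; when no measurement is available the filter simply propagates the prediction.

```python
# Minimal sketch (assumed scalar form) of the constant-position Kalman filter.
from typing import Optional

class ConstantPositionKF:
    def __init__(self, xi0, p0=1.0, q=0.01):
        self.xi, self.p, self.q = xi0, p0, q   # state, covariance, process noise

    def update(self, y_m: Optional[float], r_k: float) -> float:
        self.p += self.q                       # predict: position unchanged
        if y_m is not None:                    # joint detected in this frame
            k_gain = self.p / (self.p + r_k)   # Kalman gain from R_k
            self.xi += k_gain * (y_m - self.xi)
            self.p *= 1.0 - k_gain
        return self.xi                         # estimate xi_hat_k

# Offset fed to the controller: d_k = kf.update(y_mk, R_k) - tcp_k
```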
Joint tracking was enabled in a closed loop where the joint and beam offset \({d}_{\text{k}}\) was used to correct the position \({\text{TCP}}_{\text{k}}\); see the left part of Fig. 6. The sensor system output \({e}_{\text{k}}\), produced by the image algorithm execution and communication, was modeled as a one-sample delay, \(\Delta t=10\) ms (defined by the imaging frame rate), plus additive noise (white Gaussian \({v}_{\text{k}}\sim \mathcal{N}(0,{R}_{\text{k}})\)). The tracking axis dynamics were modeled as a zero-order hold discretization of a first-order system with a gain \(K\) (determined by the image detector pixel pitch, the optical setup, and the actuator gain) and a small time constant (\(T<\) 20 ms). The position adjustment of the tracking axis was applied perpendicular to the travel direction of the nominal trajectory \({\Phi }_{\text{k}}\).
A discrete-time proportional-integral controller with anti-windup was designed for sufficient stability margins and tracking performance and to eliminate steady-state error; see the recursive algorithm in the right part of Fig. 6. Tuning the proportional gain \({K}_{\text{P}}\) and integral time constant \({T}_{\text{I}}\) resulted in a loop transfer function gain and phase margin of 5.1 dB and 46°, respectively. This design led to a closed-loop bandwidth of 23 Hz. The output limits and anti-windup ensured that the joint and beam offset stayed within the FOV of the sensor. The parameter \({T}_{\text{t}}\) defines the time constant that determines how quickly the integral windup is reset.
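A minimal sketch of such a controller is given below. The gains, time constants, and output limits are placeholders for the paper's (unstated) tuning values; the structure, a discrete PI with back-calculation anti-windup, matches the description above.

```python
# Minimal sketch of the discrete-time PI controller with anti-windup.
class TrackingPI:
    def __init__(self, kp, ti, tt, dt=0.010, u_min=-5.0, u_max=5.0):
        self.kp, self.ti, self.tt, self.dt = kp, ti, tt, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def step(self, d_k: float) -> float:
        """d_k: joint and beam offset estimate; returns the saturated
        tracking-axis command, keeping the joint within the sensor FOV."""
        u_raw = self.kp * d_k + self.integral
        u = min(max(u_raw, self.u_min), self.u_max)   # output limits
        # integral update with back-calculation anti-windup (time constant T_t)
        self.integral += self.dt * (self.kp / self.ti * d_k
                                    + (u - u_raw) / self.tt)
        return u
```

Note that the dt of 10 ms matches the one-sample delay set by the 100 frames-per-second imaging.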
In total, six workpieces were welded to evaluate the joint tracking system; only four of the six were evaluated during welding. The resulting bandwidth of the closed-loop design was more than sufficient compared to the welding travel speed of 12 mm/s: the closed-loop response time was 13 ms, which corresponds to a welding travel distance of approximately 0.16 mm. The stability margins ensured sufficient robustness in the presence of the estimation errors. Figure 7 shows representative top-view images from three different test cases, with the black circle marking the position of the LB spot. The top two images were captured during robot motion with joint tracking deactivated and no active welding; a large joint and beam offset is apparent, and it varies significantly along the joint path. The two center images in Fig. 7 show the offset with joint tracking activated: the LB now consistently impinges on the center of the joint, and manual inspection of the entire image sequence confirms that the control algorithm successfully adjusts the LB to the joint position. The bottom two images in Fig. 7 show the result with joint tracking activated during welding. By manual inspection of the images, it can be concluded that the control algorithm successfully adjusts the LB to the joint position in this test case as well; the disturbance from the process does not significantly impact the performance of the system.
Figure 8 graphs the recorded joint and beam offsets during the welding of one representative experiment. To avoid lack of sidewall fusion, the offset should typically not exceed half the effective LB diameter (0.56 mm). This was confirmed by the cross sections in Fig. 1, where an offset of less than 0.6 mm resulted in sound welding while an offset of 1 mm resulted in a lack of sidewall fusion. Control limits at 0.56 mm are indicated in Fig. 8 by the horizontal dashed lines. The solid curve shows the offset from welding without joint tracking; it reaches a maximum of approximately 1.5 mm. The dotted curve shows the offset during welding with active joint tracking. Figure 8 shows how the system keeps the LB position within the control limits during the welding sequence; however, in one location (at \(t=32 \text{s}\)) the offset slightly exceeds the limit.
The mean joint and beam offset was no larger than 0.17 mm across the four test cases, which, compared with the specified control limit of 0.56 mm, demonstrates convincing robustness. As an example, the joint and beam offsets in the graph in Fig. 8 have a mean absolute error of 0.11 mm and a maximum absolute error of 0.85 mm; the results from all four welded plates are shown in Table 1. The maximum joint and beam offsets were no larger than 1.0 mm and occurred at positions along the weld joint where a tack weld and an artificial scratch coincided with a very small curve radius. However, the workpieces were deliberately designed to push the limits of the system, and as explained in more detail below, such situations can be avoided in the design of the tack weld sequence.
To find the root cause of the offsets exceeding the control limits, the locations were identified and the images at those locations were analyzed. It was concluded that all these locations were related to tack welds positioned in sharp joint path turns. An example is illustrated in Fig. 9. When the sensor FOV reaches a tack weld (left part of Fig. 9), the joint curve is partly obscured and detection fails. In that situation, the algorithm continues as if there were no change in the offset and freezes the tracking axis until the joint is detectable again. This isolated situation results in a mismatch between the actual and estimated joint position, causing a minor but unwanted offset, as shown in the right part of Fig. 9. This demonstrates a weakness of the joint tracking system; however, it can readily be avoided by a tack weld sequencing strategy that places tack welds at locations on the joint path where the curve radius is not too small.
Using machine vision in keyhole welding is also challenging due to intense plasma plume optical emissions, spatter, and smoke. It was therefore interesting to investigate the differences between using the control system during welding and without welding. Figure 10 shows one test using the joint tracking system with and without welding. As can be seen, there were no significant differences between the two situations, confirming that the joint tracking system was robust against typical process disturbances.
Another challenge in joint tracking of closed square butt joints is the presence of scratches that appear similar to the joint curve to the image processing algorithm. Figure 11 shows an example of such a situation: the left part shows the FOV at a position where a scratch lies alongside the joint, challenging the system. The search for curve segments representing the joint could potentially select the scratch; however, as illustrated in the right part of Fig. 11, this was successfully handled by the system. Since the joint path is modeled as a second-order polynomial, scratches can be rejected by a threshold on the allowed change in curve radius between two consecutive image frames: a scratch would imply an abrupt change in the joint curve shape, whereas the true change between consecutive images should be minimal, so scratches were ignored and the tracking system continued to follow the joint curve. A minimal version of such a gate is sketched below.
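This gate, assuming the fit coefficients from the Hough step and an illustrative threshold value, could look like this:

```python
# Minimal sketch of scratch rejection: accept a new fit only if the
# second-order (curvature) coefficient changes little between frames.
def accept_fit(a_prev: float, a_new: float, max_da: float = 0.002) -> bool:
    """Gate on the frame-to-frame change of the curvature coefficient;
    an abrupt change suggests a scratch rather than the joint."""
    return abs(a_new - a_prev) <= max_da
```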
Finally, the presented system should also be assessed in the context of other relevant research addressing joint tracking in robotic laser welding of curved closed square butt joints. Table 2 presents an overview of the various methods referenced here, providing an indicative comparison of their respective technology readiness levels (TRL) and their alignment with industrial requirements. Except for the method proposed in this study, none of the other techniques has been evaluated in a closed-loop configuration during welding; instead, they have been used as joint tracers, focusing solely on estimating joint and beam offsets. Several approaches face limitations, particularly regarding sensor latency, which restricts their applicability to welding intricate geometries. Some methods demonstrate insufficient acquisition speed for most LBW applications. Others have not undergone welding-specific evaluations, leaving their performance vulnerable to realistic process interferences. Notably, one system functions solely as a detection mechanism without tracking capability, operating as a go/no-go system that triggers a welding abort so that issues can be rectified before the process resumes. Several methods have only been evaluated in the less challenging case of straight linear joints. The system developed and evaluated here was integrated into an industrial LBW head, meeting requirements on physical and algorithmic robustness; it eliminated the problem of sensor forerun, and its performance was evaluated under industrially relevant welding conditions.