Curve fitting is a fundamental problem in applications such as computer-aided design, and it is increasingly being explored for data analysis and compression in large-scale simulations.

A B-spline (or basis spline) is commonly used to fit a smooth curve to the data, a process that involves subdividing the curve’s domain at points called knots.
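As a minimal illustration (not taken from the paper), the short SciPy sketch below evaluates a cubic B-spline on a hand-picked clamped knot vector; the knot values and control points are arbitrary and serve only to show how knots subdivide the parameter domain.

    import numpy as np
    from scipy.interpolate import BSpline

    # A cubic (degree-3) B-spline on a "clamped" knot vector, which repeats the
    # end knots degree + 1 times so the curve starts and ends at the first and
    # last control points.  Knot values here are arbitrary.
    degree = 3
    knots = np.array([0, 0, 0, 0, 0.3, 0.5, 0.7, 1, 1, 1, 1])
    ctrl = np.array([0.0, 0.5, 2.0, 1.5, 0.2, -0.5, 0.0])  # len(knots) - degree - 1 values

    spline = BSpline(knots, ctrl, degree)
    x = np.linspace(0, 1, 200)
    y = spline(x)   # a smooth curve whose shape depends on the knot spacing

Moving the interior knots 0.3, 0.5, and 0.7 changes where the curve can bend most freely, which is why the placement of knots matters so much for approximation accuracy.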

Because the choice of knots significantly affects the accuracy of a B-spline approximation of a curve, many researchers have addressed this “knotty” problem. Nevertheless, it remains a challenging task. Some methods start with a few knots and iteratively insert more, seeking the best set, whereas others start with a dense set of knots and iteratively remove them until the fit converges. Some approaches treat knot placement as a convex optimization problem; others use machine learning; and still others rely on heuristics to guide the placement. These methods are often computationally expensive, however, and can produce globally suboptimal results or higher errors.

“What we need to be able to do is optimize both the number and the placement of the knots,” said Tom Peterka, a computer scientist in the Mathematics and Computer Science Division at Argonne National Laboratory, who has been studying the problem with colleagues from Purdue University.

The team has now developed a method for approximating a curve by a B-spline of arbitrary order. Key to the method is a feature function that characterizes the amount and spatial distribution of geometric detail in the input curve by estimating its derivatives. Knots are then selected so that the feature content is distributed evenly across the knot intervals.
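The paper’s feature function is not reproduced here, but the rough idea can be sketched under simplified assumptions: use the magnitude of an estimated second derivative as a stand-in feature and choose interior knots so that each knot span carries an equal share of the cumulative feature content. The function name and feature definition below are illustrative, not the authors’.

    import numpy as np

    def knots_from_feature(t, y, n_interior):
        """Illustrative sketch only: a stand-in feature based on the estimated
        second derivative, not the feature function defined in the paper."""
        d2 = np.gradient(np.gradient(y, t), t)   # estimated second derivative
        feature = np.abs(d2) + 1e-12             # more geometric detail -> larger feature
        cdf = np.cumsum(feature)
        cdf /= cdf[-1]                           # normalized cumulative feature content
        # Invert the cumulative curve at equal increments so every knot span
        # receives the same share of the feature content.
        targets = np.linspace(0.0, 1.0, n_interior + 2)[1:-1]
        return np.interp(targets, cdf, t)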

“Use of derivative methods has been proposed before, but unlike those methods, our method is faster, achieves more accurate results for a wide variety of curves, and typically reduces the number of necessary knots,” said Peterka.

The new method comprises six steps. First, the researchers calculate the derivatives of the input data. Next they calculate the feature function: the higher its value, the smaller the knot spans at that parameter location. They then determine a parameter that decides how many knots to use: the more complicated the input data, the more knots needed to meet the desired error tolerance. The feature curve is then adjusted to avoid placing knots too close to each other. Using the adjusted feature curve, the researchers determine the non-uniform knot vector: parameter values that allow arbitrary spacing between knots and thus influence the shape of the B-spline curve. The last step is a least-squares minimization that yields the B-spline approximation of the input data.
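Putting the pieces together, a simplified, hypothetical version of this pipeline for 1D data might look like the sketch below. Steps 3 (choosing the number of knots from an error tolerance) and 4 (separating knots that land too close together) are omitted, the feature function is again only a stand-in, and the final fit uses SciPy’s make_lsq_spline rather than the authors’ own solver.

    import numpy as np
    from scipy.interpolate import make_lsq_spline

    def fit_bspline(t, y, n_interior, degree=3):
        # Steps 1-2: estimate derivatives and form a (stand-in) feature function.
        d2 = np.gradient(np.gradient(y, t), t)
        feature = np.abs(d2) + 1e-12
        # Step 5: non-uniform interior knots that split the feature content evenly.
        cdf = np.cumsum(feature)
        cdf /= cdf[-1]
        targets = np.linspace(0.0, 1.0, n_interior + 2)[1:-1]
        interior = np.interp(targets, cdf, t)
        # Clamped knot vector: repeat the end parameters degree + 1 times.
        knots = np.r_[[t[0]] * (degree + 1), interior, [t[-1]] * (degree + 1)]
        # Step 6: least-squares B-spline fit on the chosen knots.
        return make_lsq_spline(t, y, knots, k=degree)

    # Example: a sharp bump near t = 0.5 attracts most of the interior knots.
    t = np.linspace(0.0, 1.0, 400)
    y = np.exp(-200.0 * (t - 0.5) ** 2) + 0.1 * t
    spline = fit_bspline(t, y, n_interior=12)
    max_err = np.max(np.abs(spline(t) - y))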

Fig. 1: Approximation error for all the methods, showing that the proposed method (“Our method”) achieves the lowest approximation error. The dataset was a large, randomly generated nonuniform B-spline dataset representative of the complicated and unpredictable datasets found in real-world applications.
 

The team compared their knot placement method with five state-of-the-art methods. Each method was run 15 times with different numbers of knots and with datasets having a range of complexity. The test datasets comprised a combination of synthetic and actual scientific data.

The experimental results showed that the new method offers several benefits. It has an approximation accuracy comparable to or higher than that of the other methods tested (see Fig. 1). Moreover, fewer knots are needed for various 1D input data and parametric curves in two or more dimensions. The new method is also computationally inexpensive, with execution times on a par with the fastest methods tested.

“These benefits are significant, but we haven’t solved the Gordian knot problem completely yet,” Peterka admitted, referring to an ancient legend involving an intricately tied knot that seemed impossible to unravel. Alexander the Great allegedly just sliced through it with his sword. “Alexander’s was a simple solution – though perhaps he cheated a bit. We still face some challenges,” Peterka said.

One such challenge is the presence of “noisy” data. Currently the researchers use noise-free datasets as input, but noise arises from the calculation of higher-order derivatives, so new approaches are needed. Another challenge is extending the method to multidimensional data.

Some ancient historians who wanted a more ethical explanation of Alexander’s success say that he realized that removing the lynchpin would expose the ends of the rope, thus allowing him to loosen the knots. “We need to figure out which are the lynchpins for our problem,” Peterka said.

For the paper detailing this new approach, see R. Yeh, Y. S. G. Nashed, T. Peterka, and X. Tricoche, “Fast Automatic Knot Placement Method for Accurate B-Spline Curve Fitting,” Computer-Aided Design 128, 102905, November 2020.