Approximation theory


Overview

Definition and Importance

Approximation theory is a branch of mathematics that focuses on how functions can be effectively approximated by simpler functions, such as polynomials, finite elements, or Fourier series. The field is crucial for quantitatively characterizing the errors that arise from such approximations, which vary depending on the specific application at hand.[3.1] Foundational work in approximation theory includes significant contributions from mathematicians such as P.L. Chebyshev, who explored the best uniform approximation of functions by polynomials, and K. Weierstrass, who demonstrated that continuous functions on finite intervals can in principle be approximated by polynomials.[4.1] These developments laid the groundwork for the study of function approximation, which is essential in the analysis of numerical methods, particularly in the approximation of partial differential equations (PDEs).[3.1] Moreover, approximation theory encompasses a range of techniques and concepts, including interpolation, a vital tool in numerical analysis. Interpolation allows a complicated function to be replaced by a simpler polynomial (or piecewise polynomial) form, and it provides a mechanism for developing numerical algorithms for more sophisticated problems.[6.1] The choice of basic building components for approximation, such as algebraic and trigonometric polynomials, splines, and wavelets, is often dictated by the specific characteristics of the problem being studied, such as the underlying physics when solving PDEs.[5.1]

Applications in Various Fields

Approximation theory finds extensive applications across many fields, significantly enhancing performance and efficiency. In computer graphics, techniques such as splines and wavelets are pivotal. Spline curves, including cubic, Bézier, and B-splines, are used to render smooth curves and surfaces, allowing efficient representation of complex shapes. Challenges arise, however, in dynamically re-stroking these curves based on the zoom level and on how much of the curve is visible, necessitating robust algorithms to manage these adjustments.[11.1] The de Boor algorithm, which handles non-uniform B-splines, exemplifies the flexibility required in sampling density and memory-access patterns, particularly in GPU implementations.[12.1] Wavelets, rooted in approximation theory and signal processing, have been applied to many problems in computer graphics, including image editing, compression, and automatic level-of-detail control for rendering curves and surfaces.[14.1] These applications demonstrate the versatility of wavelets in managing complex graphical data and enhancing visual fidelity. In numerical analysis, polynomial interpolation is a vital technique for approximating functions, with various methods addressing different types of data and specific challenges. For example, piecewise polynomials are a practical alternative to high-degree polynomial interpolation, as they allow a large number of data points to be fit with low-degree polynomials.[23.1] This approach sidesteps both the theoretical and the practical difficulties posed by high-degree polynomials.
Additionally, the Newton interpolating polynomial is often the best choice because of its flexibility: new data pairs can be added and interpolated by merely appending one additional term to the existing polynomial.[22.1] The selection of an appropriate interpolation technique is crucial, as it can significantly influence the accuracy of estimates. Polynomial interpolation provides smooth approximations, while methods like nearest-neighbour offer quick estimates, and techniques such as logarithmic and Lagrange interpolation suit specific data types that require more sophisticated estimation.[24.1] Careful choice of the interpolation method is therefore essential for reliable results. Approximation theory is also integral to control theory, particularly in robotics, where linear approximations are needed to model the inherently non-linear dynamics of mechanical systems.[25.1] This application underscores the practical relevance of approximation techniques in real-world scenarios, bridging theoretical concepts with tangible outcomes.
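The incremental property of the Newton form mentioned above can be made concrete with a minimal Python sketch (standard library only; the function names are illustrative, not from the cited sources). The divided-difference table yields the coefficients, and evaluating the Newton form uses a Horner-like scheme:

```python
def divided_differences(xs, ys):
    """Return the Newton divided-difference coefficients for points (xs, ys)."""
    coeffs = list(ys)
    n = len(xs)
    for j in range(1, n):
        # Update in place from the bottom up so earlier columns are preserved.
        for i in range(n - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton-form interpolating polynomial at x."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

# Interpolate f(x) = x**2 through three points; the quadratic is recovered exactly.
xs, ys = [0.0, 1.0, 3.0], [0.0, 1.0, 9.0]
c = divided_differences(xs, ys)
print(newton_eval(xs, c, 2.0))  # 4.0
```

Adding a fourth data pair would leave the first three coefficients unchanged, which is exactly the flexibility described above.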

History

Early Developments

The early development of approximation theory can be traced back to the late 18th century, particularly to the work of Leonhard Euler. Euler's efforts in 1777 to minimize distance errors in maps of Russia marked a significant starting point for the mathematical techniques that would later become foundational in approximation theory.[46.1] His methods laid the groundwork for subsequent advances, including the study of the approximative power of algebraic and trigonometric polynomials and rational fractions, which became the focus of the third stage of the field's development.[42.1] Following Euler, Pierre-Simon Laplace further contributed to the evolution of approximation techniques by addressing the problem of finding the best ellipsoid to represent the Earth.[60.1] This period also saw the emergence of two distinct schools of thought in approximation theory: the Eastern or Russian school and the Western school, each incorporating different analytical techniques.[61.1] The early 20th century marked another significant period, largely due to the contributions of Sergei Natanovich Bernstein. Bernstein, a Ukrainian and Soviet mathematician, is renowned for his work on polynomial approximation, particularly his introduction of the Bernstein polynomials.
These polynomials are linear combinations of the Bernstein basis polynomials and were first used by Bernstein in a constructive proof of the Weierstrass approximation theorem, which states that any continuous function on a closed interval can be uniformly approximated by polynomials.[49.1] Bernstein functions have since found applications across several fields of mathematics, often under different definitions and names.[68.1] This broad applicability underscores the lasting impact of Bernstein's work on the theoretical framework of approximation theory and its relevance in contemporary mathematical research.
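Bernstein's construction is simple enough to sketch directly. The following minimal Python example (standard library only; names are illustrative) evaluates the degree-n Bernstein polynomial of a function on [0, 1] and checks that the approximation error shrinks as the degree grows, in line with the Weierstrass theorem:

```python
from math import comb

def bernstein_approx(f, n, x):
    """Degree-n Bernstein polynomial of f on [0, 1], evaluated at x."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

# B_n(f) converges uniformly to f on [0, 1] as n grows (Weierstrass theorem),
# even for a function that is continuous but not differentiable everywhere.
f = lambda x: abs(x - 0.5)  # kink at x = 0.5
err = lambda n: max(abs(bernstein_approx(f, n, i / 200) - f(i / 200))
                    for i in range(201))
print(err(10) > err(100))  # True: the error shrinks as the degree increases
```

Convergence at the kink is slow (roughly like n to the power -1/2), which is typical of the Bernstein operator: it trades speed for unconditional uniform convergence.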

Divergent Schools of Thought

Andrei N. Kolmogorov's creative work is exceptionally wide-ranging, addressing numerous fields including approximation theory, probability theory, functional analysis, the superposition of functions, and several others. His studies on trigonometric and orthogonal series, as well as the theory of measure and integral, allowed him to solve many conceptual and fundamental problems in these areas.[52.1] Kolmogorov's foundational contributions to the mathematical theory of probability, which studies probability measures on measurable spaces, established a rich theoretical framework that has been developed further by many researchers.[53.1] His work laid the groundwork for the Vapnik-Chervonenkis (VC) theory, introduced by Vladimir Vapnik and Alexey Chervonenkis in the 1970s; the VC dimension, which measures the capacity of a hypothesis space, extends Kolmogorov's ideas and has become a cornerstone of statistical learning theory.[54.1] Moreover, while contemporary neural networks have evolved beyond direct application of the Kolmogorov-Arnold representation theorem, the theorem continues to provide essential insights into the capabilities of function approximation.[65.1] Recent advances include deep-learning-based high-order spatial approximations of solutions to high-dimensional Kolmogorov equations, demonstrating the ongoing relevance of Kolmogorov's ideas to modern computational challenges.[66.1] Additionally, Kolmogorov-Arnold Networks (KANs) exemplify a novel approach inspired by his work: KANs use learnable, spline-parameterized functions in place of traditional fixed activation functions, a significant departure from conventional neural network architectures.[67.1]

Recent Advancements

Innovations in Approximation Methods

Recent advancements in approximation theory have increasingly highlighted its interdisciplinary nature, particularly through practical applications that have emerged from recent research. One topical collection centered on these advancements originated from the conference "Approximation Theory and Beyond," held from May 15 to 18, 2023, at Vanderbilt University in Nashville, Tennessee.[77.1] The field encompasses a variety of techniques, including statistical approximation, fuzzy approximation, approximation in the complex plane, best approximation, and interpolation methods.[78.1] These topics reflect the ongoing evolution and depth of exploration within approximation theory and its relevance to complex problems across disciplines. Gatherings such as this have fostered discussion of the latest trends and challenges in the field, encouraging collaboration among researchers from diverse backgrounds.[94.1] Likewise, the International Conference on Spectral and Approximation Theory (ICSAT-2023), held in Kerala, India, showcased recent developments in both spectral and approximation theory,[95.1] emphasizing the importance of these theories in advancing numerical methods and algorithms. Recent years have witnessed growing interest in many aspects of approximation theory, driven by the increasing complexity of mathematical models that necessitate computer calculations and by the development of the field's theoretical foundations.
This branch of mathematics has broad and important applications across many areas, including functional analysis.[91.1] Approximation theory is also a key component of contemporary algorithms in computational science and engineering, playing a crucial role in the design of efficient computational algorithms for representing real-life data and for solving problems numerically.[93.1] Notably, there have been significant advances concerning linear positive operators, with state-of-the-art research addressing new problems and extending existing results in this area.[80.1] The practical implications of these advances are evident in fields such as engineering, where approximation methods are employed to model systems and processes effectively. For instance, methods for approximating piecewise linear and generalized functions are increasingly used to tackle mathematical modeling challenges across engineering and related domains.[92.1] As a result, innovations in approximation methods not only reflect theoretical progress but also translate into tangible improvements in engineering practice.

Applications in Modern Technology

Approximation theory has found significant applications in modern technology, particularly in robotics and numerical analysis. In robotics, control theory, which fundamentally relies on approximation methods, is used to manage and direct systems. Although control theory is developed primarily for linear systems, the inherently non-linear nature of mechanical models necessitates linear approximations in order to apply these theories to robots.[25.1] Approximation theory also plays a crucial role in the analysis of numerical methods, especially the approximation of partial differential equations (PDEs); by approximating complex functions with simpler ones, such as polynomials, finite elements, or Fourier series, it facilitates the solution of intricate mathematical problems across technological contexts.[85.1] In educational settings, approximation appears in a different guise: teacher educators employ "approximations of practice" to engage novice teachers in simulations that mimic aspects of real teaching. These approximations provide a supportive environment in which novices rehearse and enact components of teaching, deepening their understanding of its complexities.[83.1] This approach, rooted in the theories of educational pioneers such as David Kolb and John Dewey, emphasizes hands-on activities that foster experiential learning and the application of theoretical concepts.[84.1]
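The linearization used in robotics can be illustrated with the standard small-angle approximation for a pendulum, where sin(theta) is replaced by theta near the equilibrium. A minimal sketch (standard library only; the function name is illustrative):

```python
import math

def linearization_error(theta):
    """Error of the small-angle linearization sin(theta) ~ theta."""
    return abs(math.sin(theta) - theta)

# The linear model is accurate for small angles and degrades for large ones,
# which is why linear control designs are valid only near an operating point.
print(linearization_error(0.1))  # roughly 1.7e-4
print(linearization_error(1.0))  # roughly 0.16
```

This is the essence of applying linear control theory to non-linear mechanical systems: the approximation is local, and controllers must keep the state near the region where the linear model is valid.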

Key Concepts

Polynomial and Rational Approximations

Polynomial approximations have long been a fundamental part of approximation theory, with significant contributions from mathematicians such as Chebyshev and Weierstrass. Weierstrass, in particular, is noted for his critical examination of Chebyshev's methods and his preference for approaches rooted in Jacobi's work, which provided deeper insight into approximation problems.[131.1] The work of these mathematicians laid the groundwork for modern approximation theory, influencing ongoing research and applications.[133.1] Spline approximation is a numerical method for constructing piecewise-polynomial functions, known as splines, that closely approximate a given set of data points. The technique is particularly useful for interpolation and smoothing, allowing more flexible and accurate modeling than traditional polynomial fitting.[141.1] In spline interpolation, instead of fitting a single high-degree polynomial to all data points, low-degree polynomials are fitted to small subsets of the values; for instance, several cubic polynomials may be fitted to different segments of the data.[143.1] While high-degree polynomial fitting can be efficient, it may incur significant errors from erratic oscillations, especially near the endpoints of the interval.[145.1] Spline interpolation thus offers a robust alternative that mitigates these issues and enhances the overall accuracy of numerical approximations. The connection between spline approximation and the numerical solution of partial differential equations (PDEs) is noteworthy, as spline methods are often used within the finite-element framework.
This approach employs piecewise-polynomial basis functions, enhancing the accuracy and efficiency of numerical solutions to PDEs.[142.1] Furthermore, interpolating splines achieve orders of approximation comparable to best-approximation splines, making them a preferred choice in many scenarios.[157.1] Compared with global polynomial approximation, splines generally offer distinct advantages, particularly in computational implementations and when the approximation must be smooth at the joins between polynomial segments.[156.1] For instance, while high-degree polynomials can produce large oscillatory errors, cubic spline interpolation tends to yield stable approximations across the entire interval.[158.1] In practice, interpolating splines are favored when the curve must pass through the data points exactly, whereas smoothing splines may be employed for noisy data to reveal underlying trends.[159.1]
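The endpoint oscillations mentioned above are easy to reproduce with Runge's classic example, 1/(1 + 25x^2) on [-1, 1]. The sketch below (standard library only; function names are illustrative) compares a single degree-10 interpolating polynomial with the simplest spline, a piecewise-linear interpolant, on the same equispaced nodes:

```python
import math

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def pw_linear_eval(xs, ys, x):
    """Piecewise-linear interpolant (a degree-1 spline) through sorted (xs, ys)."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return (1 - t) * ys[i] + t * ys[i + 1]
    raise ValueError("x outside the interpolation range")

runge = lambda x: 1.0 / (1.0 + 25.0 * x * x)      # Runge's example
nodes = [-1.0 + 2.0 * i / 10 for i in range(11)]  # 11 equispaced nodes
vals = [runge(x) for x in nodes]

grid = [-1.0 + 2.0 * k / 400 for k in range(401)]
poly_err = max(abs(lagrange_eval(nodes, vals, x) - runge(x)) for x in grid)
pw_err = max(abs(pw_linear_eval(nodes, vals, x) - runge(x)) for x in grid)

print(poly_err)  # large: the degree-10 polynomial oscillates near the endpoints
print(pw_err)    # much smaller: the low-degree piecewise interpolant stays stable
```

The global polynomial's worst-case error here exceeds 1, while the piecewise-linear interpolant stays below 0.1; cubic splines improve on this further while also being smooth at the joins.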

Error Analysis in Approximations

Error analysis is a fundamental aspect of approximation theory, focusing on how well functions can be approximated by simpler functions and on quantifying the errors that arise from such approximations. The primary concern of approximation theory is the approximation of functions, which involves determining the best methods for achieving this goal while minimizing error.[122.1] Historically, the foundations of the subject were laid by P.L. Chebyshev and K. Weierstrass, who established the study of best uniform approximation of functions by polynomials and demonstrated the feasibility of approximating continuous functions on finite intervals.[4.1] This historical context highlights the importance of error analysis in evaluating the effectiveness of approximation methods. Because approximation theory replaces complex functions by simpler ones, it has numerous applications in engineering, where it is used to simplify complex models and enhance computational efficiency.[128.1] Over the years the theory has evolved considerably, incorporating mathematical tools such as splines and wavelets that have broadened its applicability across scientific domains.[125.1] Wavelet expansions, in particular, are a powerful tool for constructing adaptive approximations, with applications ranging from signal processing to approximation theory.[140.1] Wavelets can separate fine details within a signal, functioning like an enhanced Fourier transform, which further illustrates their utility in improving model performance in various applications.[151.1] The integration of approximation theory, particularly through wavelets and splines, enhances the interpretability of machine learning models by providing customizable and expressive frameworks.
In this context, errors can arise in two key places: when approximating the underlying data relationships with a model, which concerns predictive accuracy, and when interpreting what the model has learned, which concerns descriptive accuracy.[138.1] Post hoc interpretability methods aim to extract information from a trained model without affecting its predictive accuracy, yet different interpretability methods can yield varying levels of descriptive accuracy, sometimes at the cost of predictive performance.[138.1] A significant challenge for researchers is therefore to develop new methods that achieve higher predictive accuracy while maintaining high descriptive accuracy and relevance.[138.1]
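The notion of best-approximation error discussed in this section can be made concrete with a classical worked example: by the Chebyshev equioscillation theorem, the best uniform linear approximation to f(x) = x^2 on [0, 1] is p(x) = x - 1/8, with error exactly 1/8 attained with alternating sign at x = 0, 1/2, and 1. A minimal numerical check (standard library only):

```python
# Minimax (best uniform) linear approximation to f(x) = x**2 on [0, 1].
# Equioscillation theory gives p(x) = x - 1/8, with uniform error 1/8.
f = lambda x: x * x
p = lambda x: x - 0.125

pts = [i / 1000 for i in range(1001)]
max_err = max(abs(f(x) - p(x)) for x in pts)
print(max_err)  # 0.125, attained at x = 0, 0.5, and 1 with alternating signs
```

The three alternating extrema are exactly what the equioscillation theorem requires of a best linear approximation (two more than the degree of the approximant).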

Practical Applications

Engineering and Design

Approximation theory plays a crucial role in engineering and design by providing methods to simplify complex models, thereby enhancing computational efficiency. One foundational tool is the Taylor series, which approximates a function by a sum of terms derived from the function's derivatives at a single point. This method is essential for applying mathematical principles to practical engineering problems.[167.1] In engineering practice, approximation methods span fundamental and advanced topics, including dimensional analysis and the approximate solution of differential equations, and they are valued for their utility in everyday calculations.[168.1] By simplifying complex functions into more manageable forms, approximation theory makes models more computationally efficient, enhancing the accuracy and efficiency of engineering processes.[128.1] Specific examples of its application appear in areas such as gear design and medical optics, where approximation techniques are used to tackle open problems and improve outcomes.[176.1] The application of approximation theory in mathematical modeling is likewise essential for addressing complex engineering challenges.
It specifically aids in mitigating phenomena such as the Runge and Gibbs effects and in handling difficulties that arise when studying models governed by the highly nonlinear behavior of systems of partial differential equations (PDEs).[198.1] The primary goal of employing mathematically rigorous models is to understand the physics of the processes involved and to incorporate all relevant phenomena so that their effects can be estimated accurately.[198.1] This approach is a significant tool in engineering, underpinning the design and analysis of intricate systems.[198.1] Furthermore, new methods for approximating piecewise linear and generalized functions have been pivotal in solving mathematical modeling problems across diverse fields of engineering and beyond, underscoring the versatility of approximation theory in addressing a wide range of practical challenges.[129.1]
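The Taylor-series tool mentioned above can be sketched in a few lines. The example below (standard library only) sums the Taylor series of e^x about 0 and shows how rapidly the truncation error shrinks as terms are added, which is what makes the series useful for everyday engineering calculations:

```python
import math

def taylor_exp(x, terms):
    """Partial sum of the Taylor series of exp(x) about 0."""
    return sum(x**k / math.factorial(k) for k in range(terms))

# Truncation error at x = 1 shrinks rapidly as more terms are kept.
print(abs(taylor_exp(1.0, 5) - math.e))   # roughly 1e-2
print(abs(taylor_exp(1.0, 10) - math.e))  # roughly 3e-7
```

For a convergent Taylor series the remainder is bounded by the first omitted term, so doubling the number of terms here gains several digits of accuracy.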

Computer Science and Graphics

Approximation theory is essential in computer science and computer graphics, particularly in image processing, where it underpins algorithms for tasks such as image convolution, filtration, and resizing. Local polynomial approximation techniques are especially significant here, using polynomials of two variables to improve the performance of image-processing algorithms.[169.1] These techniques contribute to effective image denoising, a critical aspect of image processing, and the intersection-of-confidence-intervals rule is combined with local polynomial approximation to address complex nonlinear estimation problems.[169.1] Approximation theory remains a highly active area of research because of its significant applications across scientific disciplines: it plays a crucial role both in theoretical areas, such as the constructive approximation of functions and the solution of partial differential and integral equations, and in applied areas, including computer-aided geometric design and image processing.[164.1] Recent developments encompass constructive multivariate approximation, the theory of splines, spline wavelets, polynomial and trigonometric wavelets, and interpolation theory.[165.1] The growing interest in the field is driven largely by the increasing complexity of mathematical models that require computer calculation, and its applications extend across many areas of mathematics, including functional analysis.[166.1] Approximation theory also underlies the development and understanding of machine learning models, providing the mathematical foundation for how well a model can approximate a function from a finite set of data points.[184.1] In tasks such as classification and regression, approximation techniques are essential for learning from data, as many machine learning
methods approximate a function or a mapping between inputs and outputs through a learning algorithm.[185.1] The universal approximation theorem further shows that a suitably designed neural network can approximate any continuous function, given sufficient width and an appropriate activation function.[186.1] Additionally, the use of polynomials for approximating multidimensional functions has opened new opportunities in applications such as image recognition and signal processing.[187.1] This broad applicability highlights the enormous technical potential of approximation theory across diverse tasks, including classification, video synthesis, and restoration.[188.1]
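A small sketch can convey the intuition behind the universal approximation theorem (standard library only; names are illustrative, and this is an intuition aid rather than a proof). A single steep sigmoid unit already approximates a step function away from the jump, and sums of such units are the building blocks with which a one-hidden-layer network approximates continuous functions:

```python
import math

sigmoid = lambda t: 1.0 / (1.0 + math.exp(-t))

def step_approx(x, sharpness):
    """One sigmoid unit approximating the unit step at 0."""
    return sigmoid(sharpness * x)

step = lambda x: 1.0 if x > 0 else 0.0
pts = [x / 100 for x in range(-100, 101) if x != 0]  # skip the jump itself
err = lambda s: max(abs(step_approx(x, s) - step(x)) for x in pts)
print(err(10) > err(100))  # True: sharper units fit better away from the jump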

Theoretical Foundations

Special Functions and q-Calculus

Recent and significant developments in approximation theory, special functions, and q-calculus have been extensively discussed and analyzed, highlighting their applications in mathematics, engineering, and related sciences, and enriching the understanding of current research problems in both pure and applied mathematics.[238.1] Approximation theory is the branch of mathematics that studies the approximation of general functions by simpler functions, such as polynomials, finite elements, or Fourier series, and it encompasses subjects including least-squares methods, minimal projections, and related topics that are essential for addressing complex problems in diverse fields.[239.1] q-Calculus is a significant area within approximation theory; it plays a crucial role in applications including numerical analysis, computer science, and engineering, where exact solutions are often impractical or impossible to obtain.[240.1]
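The basic objects of q-calculus are q-analogues that recover their classical counterparts as q approaches 1; for example, the q-number [n]_q = (1 - q^n)/(1 - q) = 1 + q + ... + q^(n-1). A minimal sketch of the q-number and q-factorial (standard library only; names are illustrative):

```python
def q_number(n, q):
    """[n]_q = 1 + q + ... + q**(n-1); equals n when q = 1."""
    return sum(q**k for k in range(n))

def q_factorial(n, q):
    """[n]_q! = [1]_q * [2]_q * ... * [n]_q."""
    prod = 1.0
    for k in range(1, n + 1):
        prod *= q_number(k, q)
    return prod

print(q_number(4, 1.0))    # 4.0: the classical integer is recovered at q = 1
print(q_factorial(4, 0.9)) # the q-deformed 4! for q = 0.9
```

Constructions such as the q-Bernstein operators replace ordinary binomial coefficients with their q-analogues built from these quantities, which is how q-calculus enters approximation theory.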


Challenges and Future Directions

Limitations of Current Approaches

Current approaches in approximation theory face notable challenges, especially in high-dimensional spaces and with non-linear functions. The curse of dimensionality significantly complicates the search for effective approximations, as the difficulty of finding solutions grows with the number of dimensions.[242.1] This issue is particularly pronounced in machine learning and data analysis, where the complexity of non-linear regression models escalates exponentially with dimensionality.[249.1] Consequently, constructing solutions over the entire domain of such functions often requires specialized mathematical methods to implement theoretical concepts effectively.[243.1] In reinforcement learning, for instance, algorithms are adversely affected by the curse of dimensionality, leading to exponentially high sample complexity in large-scale problems.[248.1] Current approaches also show a notable bias towards theorems and methods for smooth functions, which are frequent in practical approximation scenarios; this emphasis often neglects the complexities of functions that are discontinuous or nearly so, which pose unique theoretical challenges.[244.1] While interpolation by polynomials and rational functions for discontinuous functions is a well-studied historical approach, these methods have well-known limitations, as illustrated by the Runge and Gibbs effects.[258.1] Interpolation by kernels, particularly radial basis functions, has been recognized as a suitable technique for high-dimensional scattered-data problems.[258.1] Nevertheless, the prevailing bias in the field continues to impede the development of more effective methods for approximating discontinuous functions.[257.1] In addition, while polynomial and spline approximations are commonly employed, they too struggle with the curse of dimensionality.
For instance, approximating multivariate functions in high dimensions becomes increasingly infeasible as the computational burden rises with the complexity of the problem and the number of input variables.[276.1] Although B-splines offer advantages through their compact support and adaptability, they still face limitations on high-dimensional data.[277.1] The future of approximation theory is characterized by promising developments aimed at enhancing existing methods and creating new techniques. Current investigations focus particularly on constructive multivariate approximation, spline theory, and wavelets, which are pivotal for complex problems in fields including numerical analysis and machine learning.[246.1] Kernel-based methods have gained traction in machine learning for high-dimensional data: they use kernel functions to transform data into high-dimensional spaces, with significant emphasis on improving accuracy and efficiency.[252.1] The intersection of approximation theory and machine learning has also led to innovative approaches in reinforcement learning, where function approximation plays a crucial role in the prediction and control of Markov decision processes.[254.1] Moreover, the challenges posed by high-dimensional data, such as the "large p, small n" problem, spurred research initiatives like the 2008 programme at the Isaac Newton Institute for Mathematical Sciences, which aimed to develop statistical theory and methods for complex, high-dimensional data.[255.1] This backdrop has catalyzed exploration of effective techniques for approximating nonlinear functions in high-dimensional spaces, with neural networks emerging as a viable way to overcome the curse of dimensionality.[269.1] In practical applications, constructive multivariate approximation algorithms, particularly those based on sigmoidal functions, are
proving essential in neurocomputing processes related to high-dimensional data.[260.1] The ability to approximate multivariate functions is increasingly relevant across disciplines such as computer science, engineering, and physics, where problems often involve fitting hypersurfaces in four or more dimensions.[261.1]
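The curse of dimensionality described above is easy to quantify for grid-based approximation: a tensor-product grid with n sample points per axis needs n^d samples in d dimensions. A two-line sketch (standard library only):

```python
def grid_size(points_per_axis, dims):
    """Number of samples in a tensor-product grid: n**d."""
    return points_per_axis ** dims

# Sample counts explode with dimension, which is why grid-based methods fail
# in high dimensions and mesh-free or neural approaches become attractive.
print(grid_size(10, 2))   # 100
print(grid_size(10, 10))  # 10_000_000_000
```

At 10 points per axis, 10 dimensions already demand ten billion samples, which is why kernel, sparse-grid, and neural-network methods that avoid full tensor-product sampling dominate high-dimensional approximation.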

References


https://www.sciencedirect.com/topics/mathematics/approximation-theory

[3] Approximation Theory - an overview | ScienceDirect Topics Publisher Summary. Approximation theory is the branch of mathematics which studies the process of approximating general functions by simple functions such as polynomials, finite elements or Fourier series. It therefore plays a central role in the analysis of numerical methods, in particular approximation of PDE's.


https://encyclopediaofmath.org/wiki/Approximation_theory

[4] Approximation theory - Encyclopedia of Mathematics The main contents of approximation theory concerns the approximation of functions. Its foundations are laid by the work of P.L. Chebyshev (1854-1859) on best uniform approximation of functions by polynomials and by K. Weierstrass, who in 1885 established that in principle it is possible to approximate a continuous function on a finite


https://people.math.sc.edu/sharpley/math725/lectures.html

[5] Lecture outline for Approximation Theory - University of South Carolina Introduction to Concepts of Approximation Theory Week 1 Lecture 1 (8/18): Main Issues of Approximation Theory: Choice of basic building components used for approximation: conditioned on problem being studied, e.g the physics in solving PDE, ... Overview description of algebraic and trigonometric polynomials, splines, wavelets.


https://personal.math.vt.edu/embree/math5466/lecture12.pdf

[6] PDF Approximation Theory lecture 12: Introduction to Approximation Theory Interpolation is an invaluable tool in numerical analysis: it provides an easy way to replace a complicated function by a polynomial (or piecewise polynomial), and, at least as importantly, it provides a mechanism for developing numerical algorithms for more sophisticated


https://computergraphics.stackexchange.com/questions/12259/rendering-splines-on-gpu

[11] Rendering splines on GPU - Computer Graphics Stack Exchange We have an application which needs to render spline curves (cubic, bezier, b-spline etc.). We currently have working algorithms in C to stroke the control points of these curves into line strips. The issue we are running to is the need to constantly re-stroke the curves based on zoom and how much of the curve is visible.


https://student.cs.uwaterloo.ca/~cs779/pnotes.pdf

[12] PDF The de Boor algorithm allows you to handle non-uniform B-splines, and gives more flexibility in the sampling density, etc. Further, there may be memory access patterns that favor one algorithm over the other, and other issues likely arise if making GPU implementations of the algorithms.


https://sites.fas.harvard.edu/~cs278/papers/stollnitz95wavelets.pdf

[14] PDF Although wavelets have their roots in approximation theory and signal processing, they have recently been applied to many problems in computer graphics. These graphics applications include image editing, image compression, and image querying; automatic level-of-detail control for editing and rendering curves and surfaces; surface reconstruction from con

https://home.iitk.ac.in/~pranab/ESO208/rajesh/03-04/interpolation.pdf

[22] PDF Using the Newton interpolating polynomial is usually the best choice. It has the advantage that data pairs can be added and interpolated by merely adding one additional term to the previous interpolating polynomial.
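The incremental property described in [22] — appending a data pair adds only one term — falls out of the divided-difference construction. A minimal pure-Python sketch (the function names are ours, not from the cited notes):

```python
def newton_coeffs(xs, ys):
    """Divided-difference coefficients of the Newton interpolating polynomial."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        # Update in place from the bottom up so earlier coefficients survive.
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(coef, xs, x):
    """Evaluate the Newton form with Horner-style nesting."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]           # samples of f(x) = x^2
print(newton_eval(newton_coeffs(xs, ys), xs, 3.0))  # 9.0, since the quadratic is exact
```

Note that the coefficients computed from the first two points are a prefix of the full coefficient list, which is exactly why adding a data pair only appends one new term to the polynomial.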

https://relate.cs.illinois.edu/course/cs450-s17/file-version/108b563f0496870001ffa4a77b504ff4d6ef57d1/lecture-notes/lec17.pdf

[23] PDF Piecewise polynomials provide alternative to practical and theoretical difficulties with high-degree polynomial interpolation Main advantage of piecewise polynomial interpolation is that large number of data points can be fit with low-degree polynomials

https://interpolationcalculator.com/types-of-interpolation/

[24] Types Of Interpolation: A Complete Overview Polynomial interpolation allows for smooth approximation, while nearest-neighbour is a simple method for fast estimation, Logarithmic and Lagrange's interpolation are suited for specific data types requiring more complexity to estimate. Choosing the appropriate interpolation technique can determine whether your estimations are accurate or flawed.

https://math.stackexchange.com/questions/3457416/what-are-some-applications-of-linear-approximation-in-the-real-world

[25] What are some applications of linear approximation in the real world? Besides Pedro answer, I would add application in control theory. We use control theory in robotics application as instance. The theory is developed for linear systems, but mechanical modeling is very non-linear, therefore, it is necessary use linear approximations for robotics.

https://www.researchgate.net/publication/336018019_The_development_of_approximation_theory_and_some_proposed_applications

[42] (PDF) The development of approximation theory and some proposed ... The third stage in the development of approximation theory focused on approximative possibilities of the algebraic and trigonometric polynomials and rational fractions.

https://old.maa.org/press/maa-reviews/the-history-of-approximation-theory-from-euler-to-bernstein

[46] The History of Approximation Theory: From Euler to Bernstein This book aims to tell the historical evolution of the methods and results of approximation theory, starting from the work of Euler in 1777 on minimizing distance errors in maps of Russia and of Laplace in 1843 on finding the best ellipsoid for the earth, and ending with the work of Bernstein.

https://en.wikipedia.org/wiki/Bernstein_polynomial

[49] Bernstein polynomial - Wikipedia Bernstein polynomials approximating a curve. In the mathematical field of numerical analysis, a Bernstein polynomial is a polynomial expressed as a linear combination of Bernstein basis polynomials. The idea is named after mathematician Sergei Natanovich Bernstein. Polynomials in Bernstein form were first used by Bernstein in a constructive proof for the Weierstrass approximation theorem.
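The linear combination described in [49] is short enough to write out. Below is a hedged pure-Python sketch (the helper name is ours) of the Bernstein operator B_n(f), the construction underlying Bernstein's proof of the Weierstrass theorem:

```python
from math import comb

def bernstein_approx(f, n, x):
    """B_n(f)(x) = sum_k f(k/n) * C(n,k) * x^k * (1-x)^(n-k), for x in [0, 1]."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

# B_n reproduces linear functions exactly ...
print(bernstein_approx(lambda t: t, 10, 0.3))  # 0.3 (up to rounding)
# ... and converges uniformly, if slowly, for other continuous f:
# B_n(t^2)(x) = x^2 + x(1-x)/n, so the error at x = 0.5 shrinks like 1/(4n).
```

The known identity B_n(t²)(x) = x² + x(1−x)/n makes the slow O(1/n) convergence easy to check numerically, e.g. n = 100 at x = 0.5 gives 0.2525 rather than 0.25.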

https://link.springer.com/book/10.1007/978-94-011-2260-3

[52] Selected Works II: Probability Theory and Mathematical Statistics ... The creative work of Andrei N. Kolmogorov is exceptionally wide-ranging. In his studies on trigonometric and orthogonal series, the theory of measure and integral, mathematical logic, approximation theory, geometry, topology, functional analysis, classical mechanics, ergodic theory, superposition of functions, and information theory, he solved many conceptual and fundamental problems and …

https://probabilityandfinance.com/articles/06.pdf

[53] PDF According to Grundbegriffe, the mathematical theory of probability studies probability measures, i.e., measures P on a measurable space (Ω, F) such that P(Ω) = 1. An event is just a set E ∈ F and its probability is P(E). On this simple foundation Kolmogorov built a rich mathematical theory which has been developed by many researchers and is …

https://aivips.org/andrey-kolmogorov/

[54] Andrey Kolmogorov: Foundations of AI Through Mathematics Influence on Vapnik-Chervonenkis (VC) Dimension and Modern Learning Theory. Kolmogorov's foundational work in statistical learning also played a role in the development of Vapnik-Chervonenkis (VC) Theory, introduced by Vladimir Vapnik and Alexey Chervonenkis in the 1970s. The VC Dimension is a measure of the capacity of a hypothesis space

https://www.amazon.com/History-Approximation-Theory-Euler-Bernstein/dp/0817643532

[60] The History of Approximation Theory: From Euler to Bernstein From the reviews: "This book aims to tell the historical evolution of the methods and results of approximation theory, starting from the work of Euler in 1777 on minimizing distance errors in maps of Russia, and of Laplace in 1843 on finding the best ellipsoid for the earth, and ending with the work of Bernstein.

https://digitalcommons.memphis.edu/facpubs/5887/

[61] The history of approximation theory: From Euler to Bernstein The problem of approximating a given quantity is one of the oldest challenges faced by mathematicians. Its increasing importance in contemporary mathematics has created an entirely new area known as Approximation Theory. The modern theory was initially developed along two divergent schools of thought: the Eastern or Russian group, employing almost exclusively algebraic methods, was headed by …

https://medium.com/@theagipodcast/understanding-the-kolmogorov-arnold-network-52e7232f8749

[65] Understanding the Kolmogorov-Arnold Network - Medium While modern neural networks have moved beyond the direct use of the Kolmogorov-Arnold representation, the theorem still provides foundational insights into the function approximation capabilities

https://link.springer.com/article/10.1007/s10614-023-10476-2

[66] Deep Kusuoka Approximation: High-Order Spatial Approximation for ... The paper introduces a new deep learning-based high-order spatial approximation for a solution of a high-dimensional Kolmogorov equation where the initial condition is only assumed to be a continuous function and the condition on the vector fields associated with the differential operator is very general, i.e. weaker than Hörmander's hypoelliptic condition. In particular, the deep learning …

https://arxiv.org/abs/2411.06078

[67] [2411.06078] A Survey on Kolmogorov-Arnold Network - arXiv.org This systematic review explores the theoretical foundations, evolution, applications, and future potential of Kolmogorov-Arnold Networks (KAN), a neural network model inspired by the Kolmogorov-Arnold representation theorem. KANs distinguish themselves from traditional neural networks by using learnable, spline-parameterized functions instead of fixed activation functions, allowing for …

https://www.degruyter.com/document/doi/10.1515/9783110269338/html

[68] Bernstein Functions - De Gruyter Bernstein functions appear in various fields of mathematics, e.g. probability theory, potential theory, operator theory, functional analysis and complex analysis - often with different definitions and under different names. Among the synonyms are `Laplace exponent' instead of Bernstein function, and complete Bernstein functions are sometimes called `Pick functions', `Nevanlinna functions' or …

https://link.springer.com/collections/dijjagacdh

[77] Approximation Theory and Beyond (In honor of Larry Schumaker's 80th ... Over time, Approximation Theory has increasingly become interdisciplinary. This topical collection is centered on recent advancements in approximation theory and its practical applications. It originated from the conference, "Approximation Theory and Beyond," held from May 15 to 18, 2023, at Vanderbilt University in Nashville, Tennessee.

https://www.sciencedirect.com/special-issue/316945/advances-in-approximation-theory-and-special-functions-contributions-from-the-atsf-2024-conference-8th-series

[78] Systems and Soft Computing | ScienceDirect.com by Elsevier - Systems ... The topics cover the recent advances in the fields of approximation theory and special functions. We will focus on the following: Approximation Theory: Classical approximation, statistical approximation, fuzzy approximation, approximation in the complex plane, best approximation, interpolation, …

https://link.springer.com/book/10.1007/978-3-319-92165-5

[80] Recent Advances in Constructive Approximation Theory This book presents an in-depth study on advances in constructive approximation theory with recent problems on linear positive operators. State-of-the-art research in constructive approximation is treated with extensions to approximation results on linear positive operators in a post quantum and bivariate setting.

https://education.eng.macam.ac.il/article/5250

[83] Approximations of practice as a framework for understanding ... This work is grounded in a theory of practice-based teacher education, in which teachers learn from engaging in and reflecting on their practice in several ways, including engagement in approximations of practice (Grossman et al., 2009). Approximations "include opportunities to rehearse and enact discrete components of complex practice in settings of reduced complexity" (p. 283). The …

https://2sigmaschools.com/blogs/real-world-applications-bridging-the-gap-between-theory-and-practice/

[84] Real-world Applications: Bridging the Gap Between Theory and Practice ... Through hands-on activities designed to stimulate introspection, critical thinking, and the application of theoretical concepts, experiential learning is a dynamic method of teaching. Experiential learning, which was first introduced by educational theorists such as David Kolb and John Dewey, highlights the iterative process of learning via experience, introspection, conceptualization, and …

https://www.sciencedirect.com/topics/mathematics/approximation-theory

[85] Approximation Theory - an overview | ScienceDirect Topics Approximation theory is the branch of mathematics which studies the process of approximating general functions by simple functions such as polynomials, finite elements or Fourier series. It therefore plays a central role in the analysis of numerical methods, in particular approximation of PDE's.

https://www.mdpi.com/books/reprint/6657-approximation-theory-and-related-applications

[91] Approximation Theory and Related Applications | MDPI Books In recent years, we have seen a growing interest in various aspects of approximation theory. This happened due to the increasing complexity of mathematical models that require computer calculations and the development of the theoretical foundations of the approximation theory. Approximation theory has broad and important applications in many areas of mathematics, including functional analysis

https://www.sciencedirect.com/book/9780443291418/approximation-theory-and-applications

[92] Approximation Theory and Applications - ScienceDirect Approximation Theory and Applications: Piecewise Linear and Generalized Functions presents the main provisions of approximation theory, and considers existing and new methods for approximating piecewise linear and generalized functions, widely used to solve problems related to mathematical modeling of systems, processes, and phenomena in fields ranging from engineering to economics.

https://www.maths.manchester.ac.uk/research/expertise/approximation-theory/

[93] Approximation theory - Department of Mathematics - The University of ... Approximation theory is a key component of contemporary algorithms used in computational science and engineering. The mathematical theory underlying approximation has a crucial role in the design of efficient computational algorithms for representing real-life data and the numerical solution of differential equations, by providing insight into

https://link.springer.com/collections/dijjagacdh

[94] Approximation Theory and Beyond (In honor of Larry Schumaker's 80th ... Over time, Approximation Theory has increasingly become interdisciplinary. This topical collection is centered on recent advancements in approximation theory and its practical applications. It originated from the conference, "Approximation Theory and Beyond," held from May 15 to 18, 2023, at Vanderbilt University in Nashville, Tennessee.

https://link.springer.com/book/9783031902390

[95] Recent Developments in Spectral and Approximation Theory This book is a collection of recent developments in spectral and approximation theory. The results collected here were presented at the International Conference on Spectral and Approximation Theory (ICSAT-2023) which took place at Cochin University of Science and Technology in Kerala, India. The conference ICSAT-2023 focuses on two significant

https://en.wikipedia.org/wiki/Approximation_theory

[122] Approximation theory - Wikipedia In mathematics, approximation theory is concerned with how functions can best be approximated with simpler functions, and with quantitatively characterizing the errors introduced thereby. What is meant by best and simpler will depend on the application. A closely related topic is the approximation of functions by generalized Fourier series, that is, approximations based upon summation of a …

https://statisticseasily.com/glossario/what-is-approximation-theory-explained-in-detail/

[125] What is: Approximation Theory Explained in Detail Over the years, the theory has evolved significantly, incorporating various mathematical tools and concepts, including splines, wavelets, and Fourier series, which have broadened its applicability across different scientific domains. Key Concepts in Approximation Theory

https://projectthesis.com.ng/thesis/approximation-theory-and-its-applications-in-engineering-complete-phd-and-masters-thesis/

[128] Approximation theory and its applications in engineering - Complete Phd ... 5.4 Practical Implications. Brief Overview: Approximation theory is a branch of mathematics that deals with the approximation of complex functions by simpler functions. It has many applications in engineering, where it is used to simplify complex models and make them more computationally efficient.

https://www.sciencedirect.com/book/9780443291418/approximation-theory-and-applications

[129] Approximation Theory and Applications | ScienceDirect Approximation Theory and Applications: Piecewise Linear and Generalized Functions presents the main provisions of approximation theory, and considers existing and new methods for approximating piecewise linear and generalized functions, widely used to solve problems related to mathematical modeling of systems, processes, and phenomena in fields ranging from engineering to economics.

https://www.sciencedirect.com/science/article/pii/0315086089900980

[131] P. L. Chebyshev (1821-1894) and his contacts with ... - ScienceDirect Weierstrass who, together with Chebyshev, became the second co-founder of approximation theory (with the theorem of 1885 named after him), already criticized Chebyshev's methods of 1857 in that same year; he preferred to solve the problem using Jacobi's theory of elliptic functions, which gave a "clearer and deeper insight into the essence of

https://link.springer.com/book/10.1007/0-8176-4475-X

[133] The History of Approximation Theory - Springer The final chapter emphasizes the important work of the Russian Jewish mathematician Sergei Bernstein, whose constructive proof of the Weierstrass theorem and extension of Chebyshev's work serve to unify East and West in their approaches to approximation theory.

https://pmc.ncbi.nlm.nih.gov/articles/PMC6825274/

[138] Definitions, methods, and applications in interpretable machine learning In the context of ML, there are 2 areas where errors can arise: when approximating the underlying data relationships with a model (predictive accuracy) and when approximating what the model has learned using an interpretation method (descriptive accuracy). Post hoc interpretability (Section 6) involves using methods to extract information from a trained model (with no effect on predictive accuracy). Different model-based interpretability methods provide different ways of increasing descriptive accuracy by constructing models which are easier to understand, sometimes resulting in lower predictive accuracy. Thus, an effective way of increasing the potential uses for model-based interpretability is to devise new modeling methods which produce higher predictive accuracy while maintaining their high descriptive accuracy and relevance.

https://www.mdpi.com/2227-7390/11/4/983

[140] On the Exact Evaluation of Integrals of Wavelets - MDPI Wavelet expansions are a powerful tool for constructing adaptive approximations. For this reason, they find applications in a variety of fields, from signal processing to approximation theory. Wavelets are usually derived from refinable functions, which are the solution of a recursive functional equation called the refinement equation.

https://library.fiveable.me/key-terms/introduction-engineering/spline-approximation

[141] Spline approximation - (Intro to Engineering) - Fiveable Spline approximation is a numerical method used to construct a piecewise polynomial function, known as a spline, that can closely approximate a given set of data points. This technique is particularly useful in interpolation and smoothing of data, allowing for more flexible and accurate modeling compared to traditional polynomial fitting.

https://encyclopediaofmath.org/wiki/Spline_approximation

[142] Spline approximation - Encyclopedia of Mathematics Methods of spline approximation are closely connected with the numerical solution of partial differential equations by the finite-element method, which is based on the Ritz method with a special choice of basis functions. In this method, one chooses piecewise-polynomial functions (i.e. splines, cf. Spline) as basis functions.

https://en.wikipedia.org/wiki/Spline_interpolation

[143] Spline interpolation - Wikipedia In the mathematical field of numerical analysis, spline interpolation is a form of interpolation where the interpolant is a special type of piecewise polynomial called a spline. That is, instead of fitting a single, high-degree polynomial to all of the values at once, spline interpolation fits low-degree polynomials to small subsets of the values, for example, fitting nine cubic polynomials …

https://web.stanford.edu/class/math114/lecture_notes/splines.pdf

[145] PDF Spline Interpolation We've approached the interpolation problem by choosing (high-degree) polynomials for our basis functions φ_j: f(x) = Σ_{j=0}^{n} c_j φ_j(x). This approach can be efficient (recall the barycentric form of the Lagrange interpolant), but using high-degree polynomials can lead to large errors due to erratic oscillations, especially near the interval endpoints.
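The endpoint oscillation warned about in [145] is easy to reproduce. A pure-Python sketch (naive Lagrange form, illustrative only) interpolating Runge's classic example 1/(1 + 25x²) at 13 equispaced nodes on [-1, 1]:

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs, ys) at x (naive O(n^2) form)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

runge = lambda x: 1.0 / (1.0 + 25.0 * x * x)
n = 12
xs = [-1.0 + 2.0 * i / n for i in range(n + 1)]   # equispaced nodes on [-1, 1]
ys = [runge(x) for x in xs]

# The interpolant matches the data at the nodes but overshoots near the endpoints:
max_err = max(abs(lagrange_interp(xs, ys, -1.0 + 2.0 * t / 400) -
                  runge(-1.0 + 2.0 * t / 400)) for t in range(401))
print(max_err > 1.0)  # True: the degree-12 interpolant oscillates wildly
```

The same data fitted with a cubic spline stays close to the function over the whole interval, which is the piecewise-polynomial advantage stressed in [23] and [143].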

https://www.tandfonline.com/doi/full/10.1080/19942060.2022.2119281

[151] Performance improvement of machine learning models via wavelet theory ... Wavelets are also able to separate fine details within a signal in a manner similar to an enhanced Fourier transform (Sifuzzaman et al.). ... Pre-processing with EMD or EEMD will improve the performance of machine learning models for estimation of river streamflow (Huang et al., 2014; Liu et al., …).

https://www.cambridge.org/core/books/exact-constants-in-approximation-theory/polynomials-and-spline-functions-as-approximating-tools/6F2E635D1E0DD8F3D98C380F1833B240

[156] 2 - Polynomials and spline functions as approximating tools They have definite advantages, in comparison with polynomials, for computer realizations and, moreover, it turns out that they are the best approximation tool in many important cases. In this chapter we give the general properties of polynomials and polynomial splines that it will be necessary to know in the rest of the book.

https://encyclopediaofmath.org/wiki/Spline_approximation

[157] Spline approximation - Encyclopedia of Mathematics These methods were studied before problems of best approximation by splines, and attention has centred on approximation by interpolation splines (cf. Interpolation spline). These often give the same order of approximation as splines of best approximation, which is one of the advantages over interpolation by polynomials.

https://dsp.stackexchange.com/questions/38310/when-is-cubic-spline-interpolation-better-than-an-interpolating-polynomial

[158] When is cubic spline interpolation better than an interpolating polynomial? The following plot is a slight variation of an example in a text book. The author used this example to illustrate that an interpolating polynomial over equally spaced samples has large oscillations near the ends of the interpolating interval. Of course cubic spline interpolation gives a good approximation over the whole interval.

https://math.stackexchange.com/questions/14212/what-are-the-advantages-disadvantages-of-using-splines-smoothed-splines-and

[159] What are the advantages / disadvantages of using splines, smoothed ... In a nutshell: use splines if your curve has to pass through the given data (with the implicit assumption that your data is "exact"), and use smoothing splines if you have "noisy" data and you want to get a feel for how the data varies. FWIW, my opinion is that smoothing data for purposes of display is cheating; show the noisy data, and figure out why your data is noisy to begin …

https://onlinelibrary.wiley.com/doi/toc/10.1155/9303.si.269720

[164] Approximation Methods: Theory and Applications: Journal of Function Spaces Approximation theory is one of the most active research areas because of its crucial applications in many branches of science. The theory has a role in both mathematical sciences (e.g. constructive approximation of functions, solutions of partial and integral equations, etc) and engineering sciences (e.g. computer-aided geometric design, image processing, etc).

https://link.springer.com/book/10.1007/978-94-015-8577-4

[165] Approximation Theory, Wavelets and Applications | SpringerLink Approximation Theory, Wavelets and Applications draws together the latest developments in the subject, provides directions for future research, and paves the way for collaborative research. The main topics covered include constructive multivariate approximation, theory of splines, spline wavelets, polynomial and trigonometric wavelets, interpolation theory, polynomial and rational approximation.

https://search.lib.jmu.edu/discovery/fulldisplay/alma991016562509906271/01JMU_INST:01JMU

[166] Approximation Theory and Related Applications - James Madison University In recent years, we have seen a growing interest in various aspects of approximation theory. This happened due to the increasing complexity of mathematical models that require computer calculations and the development of the theoretical foundations of the approximation theory. Approximation theory has broad and important applications in many areas of mathematics, including functional analysis

https://cards.algoreducation.com/en/content/R9pFLJt_/fundamentals-approximation-theory

[167] Approximation Theory | Algor Cards - Algor Education Approximation theory includes a repertoire of formulas and methods that are essential for applying mathematical principles to practical problems. The Taylor Series, for example, allows for the approximation of functions using an infinite sum of terms calculated from the function's derivatives at a single point.

https://link.springer.com/book/10.1007/978-1-0716-0480-9

[168] Approximation Methods in Science and Engineering Approximation Methods in Engineering and Science covers fundamental and advanced topics in three areas: Dimensional Analysis, Continued Fractions, and Stability Analysis of the Mathieu Differential Equation. Throughout the book, a strong emphasis is given to concepts and methods used in everyday calculations.

https://hlevkin.com/hlevkin/articles/Local_polynomial_approximation.pdf

[169] PDF polynomials of two variables instead of orthogonal polynomials, in considering rounded and cutting blocks in 3D polynomial approximation. Key words: Local Polynomial Approximation, image processing, image filtration, gradient, image resize, video filtration, image convolution, convolution mask, noise filtration, facet model.

https://www.researchgate.net/publication/336018019_The_development_of_approximation_theory_and_some_proposed_applications

[176] (PDF) The development of approximation theory and some proposed ... Then, we provide some open problems of approximation theory in mechanical engineering and ophthalmology, through examples in gear transmission and medical optics.

https://www.restack.io/p/machine-learning-answer-approximation-theory-cat-ai

[184] Approximation Theory in Machine Learning - Restackio Approximation theory plays a crucial role in the development and understanding of machine learning models. It provides the mathematical foundation for how well a model can approximate a function based on a finite set of data points.

https://mkai.org/ai-global-news/a-gentle-introduction-to-approximation/

[185] A Gentle Introduction To Approximation - MKAI When it comes to machine learning tasks such as classification or regression, approximation techniques play a key role in learning from the data. Many machine learning methods approximate a function or a mapping between the inputs and outputs via a learning algorithm.

https://medium.com/@ML-STATS/understanding-the-universal-approximation-theorem-8bd55c619e30

[186] Understanding the Universal Approximation Theorem | by Machine Learning ... Here, we explore the UAT's fundamental role in neural network theory, emphasizing its guarantee that a suitably designed network can approximate any function, given enough neurons and the right …

https://link.springer.com/article/10.1134/S0361768821080272

[187] Optimization of Neural Network Training for Image Recognition Based on ... The use of polynomials for the approximation of multidimensional functions opens up new opportunities for studying ANNs. For example, orthogonal transformations have applications for image and speech signal processing, feature selection in image recognition, generalized Wiener filtering, and spectroscopy. An ANN can be considered not only as …

https://arxiv.org/pdf/2407.17480v3

[188] Universal Approximation Theory: The Basic Theory for Deep Learning ... range of applications, encompassing tasks such as image segmentation (Minaee et al. 2021; Wang et al. 2022a), classification (Rawat and Wang 2017; Wang et al. 2017), video synthesis (Wang et al. 2018, 2019a), and restoration (Wang et al. 2019b; Nah et al. 2019). This broad applicability highlights its enormous technical potential and …

https://link.springer.com/book/10.1007/978-3-030-94339-4

[198] Mathematical and Computational Methods for Modelling, Approximation and ... This book contains plenary lectures given at the International Conference on Mathematical and Computational Modeling, Approximation and Simulation, dealing with three very different problems: reduction of Runge and Gibbs phenomena, difficulties arising when studying models that depend on the highly nonlinear behaviour of a system of PDEs, and data fitting with truncated hierarchical B-splines

https://link.springer.com/book/10.1007/978-3-319-31281-1

[238] Mathematical Analysis, Approximation Theory and Their Applications Recent and significant developments in approximation theory, special functions and q-calculus along with their applications to mathematics, engineering, and social sciences are discussed and analyzed. Each chapter enriches the understanding of current research problems and theories in pure and applied research.

https://www.sciencedirect.com/topics/mathematics/approximation-theory

[239] Approximation Theory - an overview | ScienceDirect Topics Approximation theory is the branch of mathematics which studies the process of approximating general functions by simple functions such as polynomials, finite elements or Fourier series. ... This area of mathematics is extremely important and includes the subjects of least-squares methods, minimal projections, orthogonal polynomials, optimal …

https://statisticseasily.com/glossario/what-is-approximation-theory-explained-in-detail/

[240] What is: Approximation Theory Explained in Detail Approximation Theory is a branch of mathematical analysis that focuses on how functions can be approximated with simpler, more manageable functions. This field is essential in various applications, including numerical analysis, computer science, and engineering, where exact solutions are often impractical or impossible to obtain.

https://statisticseasily.com/glossario/what-is-approximation-theory-explained-in-detail/

[242] What is: Approximation Theory Explained in Detail Challenges in Approximation Theory. Despite its usefulness, Approximation Theory faces several challenges, particularly when dealing with high-dimensional spaces or non-linear functions. The curse of dimensionality can make it increasingly difficult to find effective approximations as the number of dimensions increases.

https://www.sciencedirect.com/book/9780443291418/approximation-theory-and-applications

[243] Approximation Theory and Applications | ScienceDirect However, challenges often arise when constructing solutions over the entire domain of these functions, requiring the use of special mathematical methods to put theory into practice. ... Approximation Theory and Applications: Piecewise Linear and Generalized Functions presents the main provisions of approximation theory, and considers existing and …

https://people.maths.ox.ac.uk/trefethen/ATAP/

[244] Approximation Theory and Approximation Practice - University of Oxford There is a bias toward theorems and methods for analytic functions, which appear so often in practical approximation, rather than on functions at the edge of discontinuity with their seductive theoretical challenges. Original sources are cited rather than textbooks, and each item in the 27-page bibliography is annotated with an editorial comment.

https://link.springer.com/book/10.1007/978-94-015-8577-4

[246] Approximation Theory, Wavelets and Applications | SpringerLink Approximation Theory, Wavelets and Applications draws together the latest developments in the subject, provides directions for future research, and paves the way for collaborative research. The main topics covered include constructive multivariate approximation, theory of splines, spline wavelets, polynomial and trigonometric wavelets, interpolation theory, polynomial and rational

https://arxiv.org/abs/2411.07591

[248] Overcoming the Curse of Dimensionality in Reinforcement Learning ... Reinforcement Learning (RL) algorithms are known to suffer from the curse of dimensionality, which refers to the fact that large-scale problems often lead to exponentially high sample complexity. A common solution is to use deep neural networks for function approximation; however, such approaches typically lack theoretical guarantees. To provably address the curse of dimensionality, we observe

https://www.sam.math.ethz.ch/sam_reports/reports_final/reports2017/2017-44_fp.pdf

[249] Another area where the curse of dimensionality has been an essential obstacle is machine learning and data analysis, where the complexity of nonlinear regression models, for example, goes up exponentially with the dimensionality. In both cases the essential problem we face is how to represent or approximate a nonlinear function in high dimensions.

https://link.springer.com/article/10.1007/s10462-020-09880-z

[252] Major advancements in kernel function approximation Kernel based methods have become popular in a wide variety of machine learning tasks. They rely on the computation of kernel functions, which implicitly transform the data in its input space to data in a very high dimensional space. Efficient application of these functions has been subject to study in the last 10 years. The main focus was on improving the scalability of kernel based methods

https://www.sciencedirect.com/science/article/pii/S0020025513005975

[254] Reinforcement learning algorithms with function approximation: Recent ... In recent years, the research on reinforcement learning (RL) has focused on function approximation in learning prediction and control of Markov decision processes (MDPs). Learning control algorithms with function approximation are surveyed, with the main focus on highly efficient RL algorithms such as fitted-Q iteration, approximate policy iteration, and adaptive critic designs (ACDs). Until recently, the three main categories of approximate RL methods for learning control have included value function approximation (VFA), policy search, and actor–critic methods. As an interdisciplinary area, RL and ADP algorithms with function approximation have attracted many research interests from different domains such as machine learning, control theory, operations research, and robotics.

https://pmc.ncbi.nlm.nih.gov/articles/PMC2865881/

[255] Statistical challenges of high-dimensional data - PMC The practical and theoretical challenges posed by the large p /small n settings, along with the ferment of recent research, formed the backdrop to the 2008 research programme 'Statistical Theory and Methods for Complex, High-dimensional Data' at the Isaac Newton Institute for Mathematical Sciences, which stimulated this Theme Issue.

https://www.math.unipd.it/~demarchi/Slides/DeMarchi_MACMAS19.pdf

[258] Interpolation by polynomials and rational functions of discontinuous functions is an historical approach and well-studied. Two well-known phenomena are the Runge and Gibbs effects [Runge 1901, Gibbs 1899]. Interpolation by kernels, mainly Radial Basis Functions, is suitable for high-dimensional scattered data problems [Hardy 1971, MJD
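The Runge effect mentioned in [258] is easy to reproduce. The sketch below is illustrative only (the function names are not from any cited source): it interpolates Runge's classic function 1/(1 + 25x²) at equispaced nodes with plain Lagrange interpolation and shows the maximum error on [-1, 1] growing as more nodes are added, rather than shrinking.

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, xi in enumerate(xs):
        weight = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                weight *= (x - xj) / (xi - xj)
        total += ys[i] * weight
    return total

def runge(x):
    """Runge's example: perfectly smooth, yet hard for equispaced interpolation."""
    return 1.0 / (1.0 + 25.0 * x * x)

def max_interp_error(n, samples=201):
    """Max |f - p_n| on [-1, 1] when interpolating at n+1 equispaced nodes."""
    nodes = [-1.0 + 2.0 * i / n for i in range(n + 1)]
    values = [runge(t) for t in nodes]
    grid = [-1.0 + 2.0 * k / (samples - 1) for k in range(samples)]
    return max(abs(lagrange_eval(nodes, values, x) - runge(x)) for x in grid)

# The error *grows* with the degree: the Runge effect.
for n in (5, 10, 15):
    print(n, max_interp_error(n))
```

Chebyshev nodes (clustered toward ±1) or the piecewise low-degree polynomials discussed in the overview avoid this blow-up.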

https://www.sciencedirect.com/science/article/pii/S0893608013001950

[260] Multivariate neural network operators with sigmoidal activation ... Constructive multivariate approximation algorithms based on sigmoidal functions are important since they play a central role in typical applications of neurocomputing processes concerning high-dimensional data. Applications of NNs with sigmoidal functions in Numerical Analysis, for instance, to the numerical solution of Volterra integral and integro-differential equations by suitable

https://link.springer.com/article/10.1007/s00521-011-0604-8

[261] Multivariate numerical approximation using constructive Various problems concerning the applications in many different disciplines such as computer science, engineering, and physics can be converted into problems of approximating multivariate functions, which increasingly involve hypersurface fitting of four-dimensional or higher-dimensional data, like the

https://arxiv.org/abs/1912.04310

[269] [1912.04310] Efficient approximation of high-dimensional functions with ... In this paper, we develop a framework for showing that neural networks can overcome the curse of dimensionality in different high-dimensional approximation problems. Our approach is based on the notion of a catalog network, which is a generalization of a standard neural network in which the nonlinear activation functions can vary from layer to layer as long as they are chosen from a predefined

https://www.sciencedirect.com/science/article/pii/S004578252400762X

[276] A Review of Recent Advances in Surrogate Models for Uncertainty Quantification of High-Dimensional Engineering Applications - ScienceDirect Challenges in surrogate modeling for high-dimensional spaces are comprehended. High-dimensional benchmark functions assessing the surrogate models are provided. Nonetheless, as the complexity of the problem increases and the number of input variables grows, the computational burden of constructing an efficient surrogate model also rises, leading to the so-called curse of dimensionality in uncertainty propagation from inputs to outputs. This paper reviews the developments of the past years in surrogate modeling for high-dimensional inputs, with the goal of quantifying output uncertainty.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8874272/

[277] Regression and Classification With Spline-Based Separable Expansions Approximating multivariate functions in high dimensions quickly becomes infeasible due to the curse of dimensionality. ... B-splines satisfy, in comparison to approximation by pure polynomials, some favorable properties as they are compactly supported—B-splines are nonzero only on a small interval—and allow for more adaptive local
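The compact support that [277] highlights follows directly from the Cox–de Boor recursion, the same recursion behind the de Boor algorithm mentioned in the overview. A minimal Python sketch (function and variable names are illustrative, not from any cited source): the cubic basis function on knots 0..4 is nonzero only strictly inside its support and exactly zero outside it.

```python
def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: value of the B-spline basis function N_{i,k} at x,
    on knot vector t (requires t[i] .. t[i+k+1] to exist)."""
    if k == 0:
        # Degree 0: indicator of the half-open knot interval [t[i], t[i+1]).
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    # Each term is skipped when its knot span has zero length (repeated knots).
    left = 0.0 if t[i + k] == t[i] else \
        (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    right = 0.0 if t[i + k + 1] == t[i + 1] else \
        (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return left + right

knots = [0.0, 1.0, 2.0, 3.0, 4.0]
print(bspline_basis(0, 3, knots, 2.0))   # 2/3 at the center of the support (0, 4)
print(bspline_basis(0, 3, knots, 4.5))   # 0.0: zero outside the support
```

This locality is what makes B-spline fitting cheap: moving one data point or control point only perturbs the few basis functions whose small support covers it.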