In this work, we study the regularization of ill-posed linear problems in Reproducing Kernel Hilbert Spaces. We provide a theoretical analysis of the developed methods and demonstrate their practical importance in real-life applications. We start with problems arising in geoscience, namely the approximation of functions defined on a sphere. This naturally leads to spherical pseudo-differential equations relating two functions defined on concentric spheres. In this spherical set-up, we study the polynomial approximation of unknown quantities through two-parameter regularization in Reproducing Kernel Hilbert Spaces. We analyze the accuracy of the proposed methods in C-space and provide guidelines for the adaptive choice of the regularization parameters together with the adaptive choice of the corresponding Reproducing Kernel Hilbert Spaces. We then turn to learning theory, where regularization in Reproducing Kernel Hilbert Spaces is more common. We pay particular attention to ranking problems, for which the known convergence rates were suboptimal compared to regression learning; we close this gap by proving that the same convergence rate can be achieved. Moreover, we analyze regularized ranking in a more general setting. Additionally, we present an approach to constructing a new ranking function on top of a family of previously constructed ones. In this approach, the new function is a linear combination of the constructed rankers, with coefficients that can be estimated efficiently from a given training set by means of the so-called linear functional strategy, so that the resulting combination is almost as good as the best of the individual rankers. We also prove that in suitable spaces no further regularization is necessary for constructing the resulting ranking function. Finally, we demonstrate the application of the developed ranking algorithms to problems in diabetes technology.
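As a rough illustration of the aggregation idea, the sketch below builds a new ranking function as a linear combination of previously constructed rankers, with coefficients fitted on a training set. This is only a minimal stand-in: ordinary least squares is used here in place of the linear functional strategy analyzed in the work, and all names and the toy set-up are assumptions for illustration.

```python
import numpy as np

def aggregate_rankers(rankers, X, y):
    """Combine a family of rankers into one ranking function.

    Illustrative sketch only: coefficients of the linear combination
    are fitted by least squares on the training scores, standing in
    for the linear functional strategy described in the text.

    rankers : list of callables, each mapping an input array to scores
    X       : training inputs, shape (n, d)
    y       : training targets used to fit the combination
    """
    # Evaluate every previously constructed ranker on the training set.
    F = np.column_stack([f(X) for f in rankers])   # shape (n, m)
    # Least-squares coefficients of the linear combination.
    c, *_ = np.linalg.lstsq(F, y, rcond=None)
    # The aggregated ranker is the corresponding linear combination.
    return lambda X_new: np.column_stack([f(X_new) for f in rankers]) @ c
```

For example, if one ranker scores by the first feature and another by the second, and the training targets happen to be an exact linear combination of the two, the fitted aggregate recovers that combination.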