Interpolate value at a point using Kriging - arcpy

Is it possible to use Kriging, in the Spatial Analyst toolbox, to interpolate a value at a point (instead of a raster)? If not, how would you approach this problem?

The SA Kriging tool takes points as input and outputs an interpolated raster. If you are only interested in the value at one point, you can use the Extract Values to Points tool to assign the interpolated (Kriging) value to the point you are interested in.
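For example, a minimal arcpy sketch (the layer and field names here are placeholders, and ordinary kriging with a spherical semivariogram is just one choice of model):

import arcpy
from arcpy.sa import Kriging, KrigingModelOrdinary, ExtractValuesToPoints

arcpy.CheckOutExtension("Spatial")

# Interpolate a raster surface from the sample points...
kriged = Kriging("sample_points", "z_value",
                 KrigingModelOrdinary("SPHERICAL"))

# ...then read the interpolated value back at the point(s) of interest.
ExtractValuesToPoints("query_points", kriged, "query_points_with_value")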

Related

In what situation do we apply the log function to the geodist function when calculating distance?

I have two code snippets and want to know when and why we apply each of them:
1. geodist(lat1, long1, lat2, long2, 'DM');
2. log(geodist(lat1, long1, lat2, long2, 'DM') + 1);
I want to know when and why we would use the second form to calculate distance instead of the first one.
geodist() returns a geographical distance. The second expression applies a log transformation to that distance (the +1 keeps the argument positive when the distance is zero, since log(0) is undefined). As in other areas, you use it when the data have a large range and you don't want the larger values to drown out the smaller values, or when the distribution of the data needs to be normalized for analysis.
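To see the effect, here is a minimal sketch (in Python rather than SAS) of what the transformation does to a wide range of distances:

import math

distances = [0.0, 1.0, 10.0, 100.0, 1000.0]        # e.g. miles from geodist()
transformed = [math.log(d + 1) for d in distances]
print(transformed)  # [0.0, 0.69, 2.4, 4.62, 6.91] -- the range is compressed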

CGAL rounding points so that they remain coplanar

I have a set of Exact_predicates_exact_constructions_kernel::Point_3 points that are coplanar. I would like to be able to serialize the points to a database in double precision, then read the points back into an exact precision space and have them still be coplanar. Specifically, my workflow is CGAL -> Postgres using PostGIS -> SFCGAL. SFCGAL requires that the points be coplanar and uses exact constructions.
I know how to write the geometry, but I'm not sure how to go about rounding the points so that they remain coplanar. Obviously the points read back will have lost precision and will be slightly displaced compared to the originals, but I only need them to be within roughly 1e-4 of their respective original positions.
Unfortunately, projecting the points onto a plane after reading them is a last resort option. It doesn't work very well for my purposes.

Best algorithm to interpolate on a grid

I have a set of points whose coordinates are given by the arrays x, y, and z, and the value of the density field at each point is stored in the array d.
I would like to reconstruct the density field on a uniform grid. What's the best algorithm to do that?
I know that in Python the scipy module comes in handy with the griddata function, but I would like to write my own code; I just need a hint.
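For reference, the scipy baseline mentioned above looks roughly like this (the sample data here is made up):

import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
pts = rng.random((200, 3))                    # scattered (x, y, z) samples
d = np.sin(np.pi * pts[:, 0])                 # density values at the samples

axis = np.linspace(0.0, 1.0, 20)              # axes of the uniform grid
gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
d_grid = griddata(pts, d, (gx, gy, gz), method="linear")  # NaN outside hull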
If you have some sort of scalar field and the points are the origins of the field, you can implement a brute force approach by walking all lattice points and calculating the field intensity given the sources. There are both recursive methods that allow "blanking" wide volumes where the field is more or less constant, and techniques to save some CPU time by calculating the variations from one point to the next.
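A minimal sketch of that brute-force walk, assuming (as an example) an inverse-square field around each source point:

import numpy as np

def field_on_grid(sources, strengths, grid_x, grid_y, grid_z, eps=1e-9):
    """Sum each source's contribution at every lattice point."""
    gx, gy, gz = np.meshgrid(grid_x, grid_y, grid_z, indexing="ij")
    total = np.zeros(gx.shape)
    for (sx, sy, sz), s in zip(sources, strengths):
        r2 = (gx - sx)**2 + (gy - sy)**2 + (gz - sz)**2
        total += s / (r2 + eps)          # 1/r^2 falloff, softened at r = 0
    return total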
If the points you have are samplings of a value, then you will have to decompose your space into volumes and interpolate the values. You can employ a simple Voronoi decomposition - this is usually done in 2D for precipitation measurements - or a Delaunay tetrahedralization (you can look into TetGen's documentation). The first approach assumes the function is constant throughout each Voronoi volume; the latter gives a piecewise-linear (barycentric) interpolation within each tetrahedron.
If you need to smooth a 3D grid, the trilinear interpolation looks like the best approach.
There are also other methods used for fast visualization that involve maintaining a list of 3D points in order of distance from any given point in your regular grid. When moving through the grid, you recalculate distances using quadratic increments. Then you perform a simple interpolation based on a subset of points of chosen cardinality (i.e., if you consider the four nearest points at distances d1..d4, you calculate the value at P by proportionally weighting the values v1..v4). This approach is fast and easy to implement yourself, but be warned that it underperforms wherever the minimum distance between points is less than the lattice step (you can compensate by considering more points where this happens; the effect is less evident if the sampled function is smooth at the same scale).
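A minimal sketch of that k-nearest weighting, using scipy's k-d tree for the distance queries instead of the incremental updates described above:

import numpy as np
from scipy.spatial import cKDTree

def idw_on_grid(points, values, grid_points, k=4, eps=1e-12):
    """points: (n, 3), values: (n,), grid_points: (m, 3); returns (m,)."""
    dist, idx = cKDTree(points).query(grid_points, k=k)
    weights = 1.0 / (dist + eps)              # inverse-distance weights
    weights /= weights.sum(axis=1, keepdims=True)
    return (weights * values[idx]).sum(axis=1)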
If you want to implement a mathematical method yourself, you need to learn the theory, of course. In this case, it's 3D scattered data interpolation.
Wikipedia, MATLAB help and scipy help say there are at least half a dozen different methods. WP has a fairly good description of them and there's a comparison article but I strongly suggest you find something in your native language on such a terminology-intensive subject.
One approach is to form the Delaunay triangulation of the scattered points [x, y, z] (actually a tetrahedralisation in your 3D case!) and perform interpolation within each element using a linear representation of the density field, defined at the tetrahedron vertices.
To evaluate the density at each structured grid point you would (i) determine which tetrahedron the point lies within and (ii) evaluate the linear interpolant.
Forming the Delaunay triangulation is non-trivial, but there are a few good libraries that can be used for this, depending on your language of choice. One good option is CGAL.
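If a quick Python prototype is acceptable before reaching for CGAL, scipy wraps exactly this pipeline; a minimal sketch (the array names are placeholders):

import numpy as np
from scipy.interpolate import LinearNDInterpolator

def density_on_grid(x, y, z, d, grid_x, grid_y, grid_z):
    """x, y, z, d: 1-D sample arrays; grid_*: 1-D axes of the target grid."""
    # Builds the Delaunay tetrahedralisation and evaluates the
    # piecewise-linear interpolant defined at the tetrahedron vertices.
    interp = LinearNDInterpolator(np.column_stack([x, y, z]), d)
    gx, gy, gz = np.meshgrid(grid_x, grid_y, grid_z, indexing="ij")
    return interp(gx, gy, gz)  # NaN for points outside the convex hull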
Hope this helps.

Calculate Mapping of Nearest Points of 2 matrices

I have two matrices A and B. Each of them has 2 columns holding the coordinates of a point (x, y).
I need to compute a mapping of points from A to B such that the paired points have the least Euclidean distance between them.
Essentially I am trying to emulate what SIFT does on images, but I will not carry out the steps that SIFT uses for matching the points...
Thus, for all points in A, I compute the Euclidean distance to all points in B, and then remove the pair of points with the least distance. I continue doing this until A and B are both empty.
Could someone tell me the most efficient way of doing this?
EDIT
Can somebody help me... The issue I am facing is that I need to compute all-vs-all distances before selecting the minimum of them as the first mapping. Then I need to do this all over again, making the computation really long...
Is there any way this can be done efficiently in MATLAB ?
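For what it's worth, here is a minimal sketch of the greedy procedure described above (in Python/NumPy rather than MATLAB; pdist2 would play the role of cdist): the all-vs-all distance matrix is computed once, then the current minimum is picked and its row and column are masked out.

import numpy as np
from scipy.spatial.distance import cdist

def greedy_match(A, B):
    """A: (nA, 2), B: (nB, 2); returns a greedy list of (i, j) index pairs."""
    D = cdist(A, B)                      # all-vs-all Euclidean distances
    pairs = []
    for _ in range(min(len(A), len(B))):
        i, j = np.unravel_index(np.argmin(D), D.shape)
        pairs.append((i, j))
        D[i, :] = np.inf                 # point i of A is used up
        D[:, j] = np.inf                 # point j of B is used up
    return pairs

If a globally optimal one-to-one assignment is wanted instead of the greedy one, scipy.optimize.linear_sum_assignment (the Hungarian method) solves that directly on the same distance matrix.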
Are you referring to the Procrustes distance between the two different configurations of points? If so, MATLAB has a built-in function that computes the smallest-norm transformation that brings the points into alignment (this is the Procrustes distance).
See this documentation for how to use it. If you don't have the Statistics Toolbox, then you should check the MATLAB Central File Exchange first to see if anyone has written a non-toolbox version of the procrustes() function before seeking to write your own.
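For readers outside MATLAB, scipy ships an equivalent; a minimal sketch (the rotated-and-shifted test data is made up):

import numpy as np
from scipy.spatial import procrustes

A = np.random.rand(10, 2)
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
B = A @ R.T + 0.5                        # rotated + translated copy of A
mtx1, mtx2, disparity = procrustes(A, B)
print(disparity)                         # ~0: the configurations align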

Numerical integration of a function with values known at a given point set (finite and discrete) over an area bounded by discrete points?

Let D be the area bounded by a series of points {x_i, y_i} (1 <= i <= N). (The area need not be convex, and the points are assumed to run along the boundary curve.)
Let f be a function defined on D, but we only know its values on a given finite, discrete point set, say {x'_i, y'_i, f(x'_i, y'_i)} (1 <= i <= N'). (The given data set need not be "dense" in D.)
How can I do numerical integration of f over D?
Here is what I think:
1) First we should approximate the boundary of D by segments between those boundary points.
2) Then we should do some interpolation on the given data set. However, interpolation in two dimensions is not always possible. This is where I get stuck.
Can you please help? Thank you.
If you were able to triangulate your points, the job would be done: in each triangle, you know the function values at the corner points, and you integrate those via
triangle_area * (val1 + val2 + val3) / 3.0
While convex triangulation is a solved problem with lots of tools available (check out qhull, for example), nonconvex triangulation is a lot harder. Anyhow, digging in this direction will probably get you somewhere.
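A minimal sketch of that per-triangle rule, assuming the triangulation is already in hand as an index list:

import numpy as np

def integrate_on_triangulation(pts, values, triangles):
    """pts: (n, 2), values: (n,), triangles: iterable of (i, j, k) triples."""
    total = 0.0
    for i, j, k in triangles:
        (x1, y1), (x2, y2), (x3, y3) = pts[i], pts[j], pts[k]
        # Area via the cross product of two edge vectors.
        area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
        # A linear interpolant integrates to area * mean of vertex values.
        total += area * (values[i] + values[j] + values[k]) / 3.0
    return total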
I'd write the solution as a contour integral (via Green's theorem, an area integral of f can be traded for a boundary integral of an antiderivative of f) and evaluate it as the sum of Gaussian or log-quadrature 1D numerical integrations over each piecewise segment of the boundary. Log quadrature is useful if the function is singular at some point on the curve.
You have to know the function values at the endpoints of each piecewise curve. You assume a particular interpolation function (linear to start, higher order if you'd like) and do the numerical integration by interpolating between the endpoint values.
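A minimal sketch of that per-segment step, using Gauss-Legendre nodes and a linear interpolant between the endpoint values (the log-quadrature variant would swap in different nodes and weights):

import numpy as np

def segment_quadrature(f0, f1, length, order=4):
    """Integrate a linearly interpolated function over one boundary segment."""
    nodes, weights = np.polynomial.legendre.leggauss(order)
    t = 0.5 * (nodes + 1.0)              # map [-1, 1] -> [0, 1]
    fvals = (1.0 - t) * f0 + t * f1      # linear interpolation between ends
    return 0.5 * length * np.dot(weights, fvals)

Summing segment_quadrature over all boundary segments gives the contour integral; with a linear interpolant this reduces to the trapezoidal rule, and higher-order interpolants are what justify the higher-order quadrature.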
If you get that far, I'd recommend checking convergence by doubling the number of curves along the contour and re-integrating. If the integral value doesn't change after one or two iterations, you can consider yourself converged.
If you're saying you don't know the values of the function anywhere on the curve, then you can't do the integration.
