


Things can look off even if everything is set up correctly (especially at the edges of features). Which case are you trying to solve? I'll be happy to share more information if needed.

So you mean that if, after a calibration, I get the intrinsic parameter for the displacement of the lens principal point cx = 965.3 and my image is 1920 pixels wide, then the value to give to MeshLab should be 1920 − 965.3 ≈ 955? The rest of the parameters you mention make sense to me. I will try them, but right now I am stuck with another problem where I cannot see the point cloud, and therefore cannot see whether the alignment is correct.

Concerning the cases you mention, we are doing a reconstruction of a room using a Kinect, so in the end we will be using case 3. However, since we were having problems, we tried to simplify everything and are now making sure that case 1 works well, i.e., that in case 1 we get well-aligned images. See below.

From what I understood, what you do is create a MeshLab project file using a Python XML parser, and then you read the .mlp file by doing "Open Project"? There are other ways to import a raster and set the camera pose in MeshLab, but I am not sure they all have the same behaviour.

For example, right now we are using "File -> Import Raster", then right mouse button over the raster and "Set Raster Camera", where we choose the Agisoft format (this is for MeshLab version v2016.12). But now that we have changed to the latest MeshLab version, we are having another problem with not seeing the point cloud at all (see ), so we are first focusing on solving that issue, and only then can we go back to the alignment.
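As a quick sanity check on the conversion discussed above (assuming, per the calibration advice in this thread, that MeshLab wants the image size minus the calibrated principal point, rounded to an integer), a minimal sketch:

```python
# Sketch: convert an OpenCV/MATLAB-style principal point to MeshLab's
# CenterPx, assuming CenterPx = image_size - calibrated_center, rounded
# to an integer (the rounding works around the MeshLab bug noted in
# this thread).

def center_px(image_size: int, calibrated_center: float) -> int:
    """Mirror the principal point across the image and round."""
    return round(image_size - calibrated_center)

# Example values from the discussion: cx = 965.3, image 1920 pixels wide.
print(center_px(1920, 965.3))  # -> 955
```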
#Alignment function in MeshLab
I am using a depth camera (a Kinect or an Xtion), and I am trying to use it for creating a textured mesh. I am importing the point cloud, which has color associated with the vertices. At the same time, I also import a raster of the associated RGB image. I cannot make the color in the vertices align properly with the color in the imported raster. Note that the point cloud and the image (raster) are taken with the camera standing still, so I think they should align (perfectly). My guess is that I am not writing the VCGCamera file, which gives the information about the camera's pose and intrinsics, correctly. Here is my last try at the VCGCamera xml file to be loaded in MeshLab.

First case: assuming the extrinsics are the same for the color image and the depth image, i.e., a 4-by-4 identity matrix. And here are both overlaid, where you can see the misalignment. The alignment is good but never perfect, and it should be correct?

If you have known calibration parameters (I use MATLAB, but OpenCV should work just as well), you need to set the VCGCamera the following way: CenterPx => image size x − calibrated center pixel x (and the same for y). Round the results to an integer, since there is a bug in MeshLab. LensDistortion => set to 0 (but undistort the color image with external software, using the calibration parameters).
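Since the thread mentions generating the project file with a Python XML parser, here is a minimal sketch of writing a VCGCamera element that follows the CenterPx recipe above. The attribute names mirror those found in .mlp files saved by MeshLab around v2016.12, but verify them against a project exported by your own version; the focal and principal-point values are hypothetical, and treating FocalMm as a focal length in pixels by setting PixelSizeMm to 1 is an assumption.

```python
# Sketch: build a VCGCamera element for a MeshLab project (.mlp) from
# OpenCV-style intrinsics. Attribute names are taken from .mlp files
# saved by MeshLab (v2016.12 era) -- verify against your own version.
import xml.etree.ElementTree as ET

def vcg_camera(width, height, fx, cx, cy):
    # CenterPx is measured from the opposite image edge; round to an
    # integer to work around the MeshLab bug mentioned in the thread.
    center = f"{round(width - cx)} {round(height - cy)}"
    cam = ET.Element("VCGCamera", {
        "ViewportPx": f"{width} {height}",
        "CenterPx": center,
        # With PixelSizeMm = "1 1", FocalMm carries the focal length in
        # pixels (assumption; square pixels).
        "PixelSizeMm": "1 1",
        "FocalMm": str(fx),
        # Zero distortion: undistort the color image externally instead
        # (e.g., with OpenCV's cv2.undistort).
        "LensDistortion": "0 0",
        # Identity pose, matching "case 1" in the discussion.
        "TranslationVector": "0 0 0 1",
        "RotationMatrix": "1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1",
    })
    return ET.tostring(cam, encoding="unicode")

# Hypothetical intrinsics for a 1920x1080 image.
print(vcg_camera(1920, 1080, fx=1081.4, cx=965.3, cy=539.2))
```

Dropping the distortion into the XML (rather than undistorting the image first) is exactly what the reply above advises against, which is why LensDistortion stays at zero here.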
