This page collects material on the TUM RGB-D dataset, an RGB-D dataset and benchmark for visual SLAM evaluation. The seven sequences used in this analysis depict different situations and are intended to test the robustness of algorithms under these conditions. Related datasets from the same group include the Rolling-Shutter Dataset, the SLAM for Omnidirectional Cameras dataset, and the TUM Large-Scale Indoor (TUM LSI) Dataset; a common starting point is compiling and running ORB-SLAM2 and testing it on the TUM dataset. Compared with state-of-the-art methods, experiments on the TUM RGB-D dataset, the KITTI odometry dataset, and a practical environment show that SVG-Loop has advantages in complex environments with varying light, changeable weather, and dynamic interference. Traditional vision-based SLAM research has made many achievements, but it may fail to achieve the desired results in challenging environments. The video shows an evaluation of PL-SLAM and the new initialization strategy on a TUM RGB-D benchmark sequence. Experiments were conducted on the public TUM RGB-D dataset and in a real-world environment. We provide one example that runs the SLAM system on the TUM dataset as RGB-D; the sequences are from the TUM RGB-D dataset.
The TUM RGB-D benchmark for visual odometry and SLAM evaluation is presented, and the evaluation results of the first users from outside the group are discussed and briefly summarized. The dynamic objects have been segmented and removed in these synthetic images. The results demonstrate that the absolute trajectory accuracy of DS-SLAM can be improved by one order of magnitude compared with ORB-SLAM2. The TUM RGB-D Benchmark Dataset [11] is a large dataset containing RGB-D data and ground-truth camera poses; each sequence contains the color and depth images as well as the ground-truth trajectory from the motion-capture system. The TUM RGB-D dataset, which includes 39 sequences of offices, was selected as the indoor dataset to test the SVG-Loop algorithm. The RGB-D dataset [3] has been popular in SLAM research and has served as a benchmark for comparison as well. In the end, we conducted a large number of evaluation experiments on multiple RGB-D SLAM systems and analyzed their advantages and disadvantages, as well as their performance differences in different scenes. However, there are many dynamic objects in real environments, and they reduce the accuracy and robustness of SLAM. These tasks are handled by a Simultaneous Localization and Mapping (SLAM) module. Our dataset contains the color and depth images of a Microsoft Kinect sensor along the ground-truth trajectory. This study uses the Freiburg3 series from the TUM RGB-D dataset.
Our dataset contains the color and depth images of a Microsoft Kinect sensor along the ground-truth trajectory of the sensor. Then, the unstable feature points are removed. The TUM RGB-D dataset consists of RGB and depth images (640×480) collected by a Kinect RGB-D camera at a 30 Hz frame rate, together with camera ground-truth trajectories obtained from a high-precision motion-capture system. We use voxel sizes of 32 cm and 16 cm, except for TUM RGB-D [45], where we use 16 cm and 8 cm. Similar behaviour is observed in other vSLAM [23] and VO [12] systems as well. Qualitative and quantitative experiments show that our method outperforms state-of-the-art approaches in various dynamic scenes in terms of both accuracy and robustness. However, the pose estimation accuracy of ORB-SLAM2 degrades when a significant part of the scene is occupied by moving objects. This file contains information about publicly available datasets suited for monocular, stereo, RGB-D, and lidar SLAM. [3] provided code and executables to evaluate global registration algorithms for a 3D scene-reconstruction system. A robot equipped with a vision sensor uses the visual data provided by cameras to estimate its position and orientation with respect to its surroundings [11].
The experiments on the public TUM dataset show that, compared with ORB-SLAM2, MOR-SLAM improves the absolute trajectory accuracy, with 95.73% improvements in high-dynamic scenarios. Both groups of sequences have important challenges, such as missing depth data caused by the sensor range limit. The method performs well on the TUM RGB-D dataset. An Open3D Image can be converted directly to and from a NumPy array. The dataset contains walking, sitting, and desk sequences; the walking sequences are mainly utilized in our experiments, since they are highly dynamic scenarios in which two persons walk back and forth. The energy-efficient DS-SLAM system implemented on a heterogeneous computing platform is evaluated on the TUM RGB-D dataset. On the TUM RGB-D dataset, the Dyna-SLAM algorithm increased localization accuracy by an average of about 71%. It is able to detect loops and relocalize the camera in real time. (From the publication "Evaluating Egomotion and Structure-from-Motion Approaches Using the TUM RGB-D Benchmark".) Figure: two example RGB frames from a dynamic scene and the resulting model built by our approach. A novel semantic SLAM framework that detects potentially moving elements with Mask R-CNN to achieve robustness in dynamic scenes with an RGB-D camera is proposed in this study. The TUM RGB-D benchmark [5] consists of 39 sequences that we recorded in two different indoor environments.
The living-room scene has 3D surface ground truth together with depth maps and camera poses, and as a result is perfectly suited not just for benchmarking camera trajectories. This is an urban sequence with multiple loop closures that ORB-SLAM2 was able to successfully detect. The color images are stored as 640×480 8-bit RGB images in PNG format. We provide examples to run the SLAM system on the KITTI dataset as stereo or monocular, on the TUM dataset as RGB-D or monocular, and on the EuRoC dataset as stereo or monocular. The multivariable optimization process in SLAM is mainly carried out through bundle adjustment (BA). We provide a large dataset containing RGB-D data and ground-truth data with the goal of establishing a novel benchmark for the evaluation of visual odometry and visual SLAM systems. In this post (which draws on write-ups by several others), the main goal is to read depth-camera data in a ROS environment and, building on the ORB-SLAM2 framework, construct point-cloud maps (sparse and dense) and octree maps (OctoMap, later to be used for path planning) online. These sequences are separated into two categories: low-dynamic scenarios and high-dynamic scenarios. The depth here refers to distance. The system enables map reuse and loop-closure detection. The sequence selected is the same as the one used to generate Figure 1 of the paper.
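The bundle-adjustment step mentioned above minimizes reprojection error over camera poses and 3D points. As a minimal sketch of a single residual (not code from any of the systems discussed), using a pinhole model with made-up example intrinsics fx, fy, cx, cy:

```python
import math

def project(point_cam, fx, fy, cx, cy):
    """Project a 3D point given in camera coordinates with a pinhole model."""
    X, Y, Z = point_cam
    return (fx * X / Z + cx, fy * Y / Z + cy)

def reprojection_error(point_cam, observed_px, fx, fy, cx, cy):
    """Euclidean distance (pixels) between the predicted and observed pixel."""
    u, v = project(point_cam, fx, fy, cx, cy)
    return math.hypot(u - observed_px[0], v - observed_px[1])

# A point 2 m straight ahead projects onto the principal point, so the
# residual against an observation at (cx, cy) is zero. Intrinsic values
# here are illustrative placeholders, not the benchmark's calibration.
err = reprojection_error((0.0, 0.0, 2.0), (320.0, 240.0),
                         525.0, 525.0, 320.0, 240.0)  # -> 0.0
```

BA stacks one such residual per observation and jointly optimizes all poses and points over the sum of squares.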
Here, RGB-D refers to a dataset with both RGB (color) images and depth images. Compared with ORB-SLAM2 and the RGB-D SLAM, our system achieved improvements of 97.2% in dynamic scenes. News: DynaSLAM now supports both OpenCV 2.X and OpenCV 3.X. Source: Bi-objective Optimization for Robust RGB-D Visual Odometry. Example result (left: without dynamic-object detection or masks; right: with YOLOv3 and masks), run on rgbd_dataset_freiburg3_walking_xyz. The data was recorded at full frame rate (30 Hz) and a sensor resolution of 640×480. In contrast to previous robust approaches to egomotion estimation in dynamic environments, we propose a novel robust VO approach. The sensor can not only be used to scan high-quality 3D models, but can also satisfy the demand of practical applications. To obtain poses for the sequences, we run the publicly available version of Direct Sparse Odometry.
DRG-SLAM is presented, which combines line features and plane features with point features to improve the robustness of the system; it shows superior accuracy and robustness in indoor dynamic scenes compared with state-of-the-art methods. The single- and multi-view fusion we propose is challenging in several respects. Volumetric methods, including ours, also show good generalization on the 7-Scenes and TUM RGB-D datasets. We provide the time-stamped color and depth images as a gzipped tar file (TGZ). Freiburg3 consists of a high-dynamic scene sequence marked 'walking', in which two people walk around a table, and a low-dynamic scene sequence marked 'sitting', in which two people sit in chairs with slight movements of the head or parts of the limbs. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground-truth camera poses from a motion-capture system.
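Because the color and depth images are time-stamped individually, they have to be associated before use; the benchmark ships an associate.py script for exactly this. The following is a small re-implementation sketch of the same idea (greedy nearest-timestamp matching), not the official tool; the 0.02 s threshold is an assumption you may need to adapt:

```python
def associate(rgb_stamps, depth_stamps, max_difference=0.02):
    """Greedily match each RGB timestamp to the closest depth timestamp.

    Returns sorted (rgb_t, depth_t) pairs whose time difference is below
    max_difference (seconds). Each timestamp is used at most once.
    """
    # All candidate pairs, cheapest (smallest time gap) first.
    candidates = sorted(
        (abs(a - b), a, b) for a in rgb_stamps for b in depth_stamps
    )
    matches, used_a, used_b = [], set(), set()
    for diff, a, b in candidates:
        if diff <= max_difference and a not in used_a and b not in used_b:
            matches.append((a, b))
            used_a.add(a)
            used_b.add(b)
    return sorted(matches)

# The third RGB frame finds no depth frame within 20 ms and is dropped.
pairs = associate([0.00, 0.033, 0.066], [0.001, 0.035, 0.90])
# pairs -> [(0.0, 0.001), (0.033, 0.035)]
```

The quadratic candidate enumeration is fine for illustration; the real script behaves equivalently but is the reference implementation to use on full sequences.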
In the EuRoC format, each pose is one line of the file with the format timestamp[ns],tx,ty,tz,qw,qx,qy,qz. J. Engel, T. Schöps, D. Cremers: LSD-SLAM: Large-Scale Direct Monocular SLAM. European Conference on Computer Vision (ECCV), 2014. YOLOv3 scales the original images to 416×416. In particular, our group has a strong focus on direct methods, where, contrary to the classical pipeline of feature extraction and matching, we directly optimize intensity errors. ORB-SLAM3-RGBL. Visual odometry and SLAM datasets: the TUM RGB-D dataset [14] is focused on the evaluation of RGB-D odometry and SLAM algorithms and has been used extensively by the research community. In this part, the TUM RGB-D SLAM datasets were used to evaluate the proposed RGB-D SLAM method. The save_traj button saves the trajectory in one of two formats (euroc_fmt or tum_rgbd_fmt). [NYUDv2] The NYU-Depth V2 dataset consists of 1449 RGB-D images showing interior scenes, whose labels are usually mapped to 40 classes. The sequences contain both the color and depth images at full sensor resolution (640×480).
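The two pose-line conventions differ in timestamp unit (nanoseconds vs. seconds), separator (comma vs. space), and quaternion order (qw first in EuRoC, qw last in the TUM RGB-D format). A conversion between them, in the spirit of a euroc_fmt/tum_rgbd_fmt switch, can be sketched as follows (an illustrative helper, not code from any of the projects mentioned):

```python
def euroc_to_tum(line):
    """Convert one EuRoC pose line to the TUM RGB-D trajectory format.

    EuRoC: timestamp[ns],tx,ty,tz,qw,qx,qy,qz  (comma-separated)
    TUM:   timestamp[s] tx ty tz qx qy qz qw   (space-separated)
    """
    t_ns, tx, ty, tz, qw, qx, qy, qz = line.strip().split(",")
    t_s = int(t_ns) / 1e9  # nanoseconds to seconds
    # Quaternion moves from w-first to w-last.
    return f"{t_s:.9f} {tx} {ty} {tz} {qx} {qy} {qz} {qw}"

tum_line = euroc_to_tum("100000000,1,2,3,0.5,0.1,0.2,0.3")
```

Note that large nanosecond timestamps lose sub-microsecond precision when pushed through a double; keeping the integer nanoseconds and formatting seconds and fraction separately avoids that if it matters for association.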
Once this works, you might want to try the 'desk' dataset, which covers four tables and contains several loop closures. A challenging problem in SLAM is the inferior tracking performance in low-texture environments due to low-level-feature-based tactics. However, the method of handling outliers in actual data directly affects the accuracy. Maybe replace this with your own way to get an initialization. Both groups of sequences have important challenges, such as missing depth data caused by the sensor range limit. It provides 47 RGB-D sequences with ground-truth pose trajectories recorded with a motion-capture system. Download the sequences of the synthetic RGB-D dataset generated by the authors of neuralRGBD into the ./Datasets/Demo folder. We also provide a ROS node to process live monocular, stereo, or RGB-D streams.
By default, dso_dataset writes all keyframe poses to a result file. Two different scenes (the living-room and the office-room scene) are provided with ground truth. We recommend that you use the 'xyz' series for your first experiments. Our approach was evaluated by examining the performance of the integrated SLAM system. The system runs in real time in dynamic scenarios using only an Intel Core i7 CPU and achieves comparable accuracy. TUM RGB-D contains the color and depth images of real trajectories and provides acceleration data from a Kinect sensor. The results indicate that the proposed DT-SLAM achieves a mean RMSE of 0.0807. Traditional visual SLAM algorithms run robustly under the assumption of a static environment but often fail in dynamic scenarios, since moving objects impair the estimation. By doing this, we get precision close to stereo mode with greatly reduced computation times. We use the calibration model of OpenCV. The network input is the original RGB image, and the output is a segmented image containing semantic labels. See the settings file provided for the TUM RGB-D cameras.
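RMSE figures like the one quoted for DT-SLAM are absolute trajectory error (ATE) statistics. A minimal sketch of the final RMSE computation, assuming the estimated and ground-truth trajectories have already been timestamp-associated and rigidly aligned (the benchmark's evaluate_ate tool additionally solves for the alignment, via Horn's method, before computing this):

```python
import math

def ate_rmse(gt_xyz, est_xyz):
    """Root-mean-square translational error over matched pose pairs.

    gt_xyz and est_xyz are equal-length lists of (x, y, z) positions for
    corresponding timestamps; alignment is assumed to have been done.
    """
    assert len(gt_xyz) == len(est_xyz) and gt_xyz
    sq = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(gt_xyz, est_xyz):
        sq += (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
    return math.sqrt(sq / len(gt_xyz))

gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
est = [(0.0, 0.0, 0.0), (1.0, 0.1, 0.0)]
rmse = ate_rmse(gt, est)  # sqrt((0 + 0.01) / 2)
```

For reporting results comparable to published numbers, use the official evaluation scripts rather than a re-implementation, since alignment choices change the figure.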
TUM RGB-D trajectories can be used with the TUM RGB-D or UZH trajectory-evaluation tools and have the following format: timestamp[s] tx ty tz qx qy qz qw. To address these problems, we present a robust and real-time RGB-D SLAM algorithm based on ORB-SLAM3. It includes 39 indoor scene sequences, of which we selected the dynamic sequences to evaluate our system. The stereo case shows the final trajectory and sparse reconstruction of sequence 00 from the KITTI dataset [2]. ORB-SLAM2 is a real-time SLAM library for monocular, stereo, and RGB-D cameras that computes the camera trajectory and a sparse 3D reconstruction (in the stereo and RGB-D cases with true scale). The RGB-D case shows the keyframe poses estimated in sequence fr1 room from the TUM RGB-D dataset [3]. TE-ORB_SLAM2 is a work that investigates two different methods to improve the tracking of ORB-SLAM2. Meanwhile, deep learning caused quite a stir in the area of 3D reconstruction. Among the various SLAM datasets, we have selected those that provide pose and map information. For interference caused by indoor moving objects, we add the improved lightweight object-detection network YOLOv4-tiny to detect dynamic regions; the dynamic features in those regions are then eliminated.
Single-view depth captures the local structure of mid-level regions, including texture-less areas, but the estimated depth lacks global coherence. The proposed V-SLAM has been tested on the public TUM RGB-D dataset. In all of our experiments, 3D models are fused using surfels as implemented in ElasticFusion [15]. We evaluated ReFusion on the TUM RGB-D dataset [17], as well as on our own dataset, showing the versatility and robustness of our approach and reaching, in several scenes, equal or better performance than other dense SLAM approaches. Open3D supports various functions such as read_image, write_image, filter_image, and draw_geometries. The raw depth values must be divided by a scale factor to obtain metric distance. The TUM dataset is divided into high-dynamic and low-dynamic datasets. The TUM RGB-D dataset was proposed by the TUM Computer Vision Group in 2012 and is frequently used in the SLAM domain [6]. We increased the localization accuracy and mapping quality compared with two state-of-the-art object SLAM algorithms.
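On depth units: the 16-bit depth PNG values are not meters. The TUM RGB-D benchmark tools divide by a factor of 5000, while some other RGB-D datasets store plain millimeters (factor 1000); check the documentation of the dataset you are using. A small sketch of the conversion plus pinhole back-projection, with made-up example intrinsics (not the benchmark's calibration):

```python
def depth_to_meters(raw_value, depth_scale=5000.0):
    """Convert a 16-bit depth PNG value to meters.

    depth_scale=5000.0 follows the TUM RGB-D benchmark convention; use
    1000.0 for datasets that store plain millimeters. A raw value of 0
    means 'no measurement' and is mapped to None here.
    """
    if raw_value == 0:
        return None
    return raw_value / depth_scale

def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth using a pinhole model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

d = depth_to_meters(10000)  # -> 2.0 m under the TUM convention
p = backproject(320.0, 240.0, d, 525.0, 525.0, 320.0, 240.0)  # -> (0.0, 0.0, 2.0)
```

Getting the scale factor wrong silently scales the whole reconstruction, so it is worth verifying against a known object distance in one frame.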
The key constituent of simultaneous localization and mapping (SLAM) is the joint optimization of sensor-trajectory estimation and 3D map construction. With the advent of smart devices embedding cameras and inertial measurement units, visual SLAM (vSLAM) and visual-inertial SLAM (viSLAM) are enabling novel applications for the general public. Fig. 6 displays the synthetic images from the public TUM RGB-D dataset. The process of using vision sensors to perform SLAM is called visual SLAM. Red edges indicate high DT errors and yellow edges indicate low DT errors. Experiments on the TUM RGB-D dataset show that the presented scheme outperforms state-of-the-art RGB-D SLAM systems in terms of trajectory accuracy. We extensively evaluate the system on the widely used TUM RGB-D dataset, which contains sequences of small- to large-scale indoor environments, with respect to different parameter combinations. Recently I have been studying Dr. Gao Xiang's "Fourteen Lectures on Visual SLAM"; it made me realize how much I am still missing and how much requires deep, systematic study. You will need to create a settings file with the calibration of your camera.
While previous datasets were used for object recognition, this dataset is used to understand the geometry of a scene. Previously, I worked on fusing RGB-D data into 3D scene representations in real time and on improving the quality of such reconstructions with various deep-learning approaches. This repository provides a curated list of awesome datasets for Visual Place Recognition (VPR), which is also called loop-closure detection (LCD). TUM RGB-D [47] is a dataset containing images with colour and depth information, collected by a Microsoft Kinect sensor along its ground-truth trajectory. We propose a new multi-instance dynamic RGB-D SLAM system using an object-level, octree-based volumetric representation. In this paper, we present the TUM RGB-D benchmark for visual odometry and SLAM evaluation and report on the first use cases and users of it outside our own group. The Dynamic Objects sequences in the TUM dataset are used to evaluate the performance of SLAM systems in dynamic environments. We adopt the TUM RGB-D SLAM dataset and benchmark [25, 27] to test and validate the approach.
The ground-truth trajectory is obtained from a high-accuracy motion-capture system. However, they lack visual information for scene detail. Our method, named DP-SLAM, is implemented on the public TUM RGB-D dataset.