
Unmanned Systems Lab

Autonomous and Unmanned Vehicles

Texas A&M University College of Engineering


SP crater data

This data was collected at SP Crater near Flagstaff, Arizona, on June 5, 2012 and June 7, 2012.

 Table of contents
  1. Images for each filter
  2. Panoramas
  3. Temperature plots

Images for each filter

Here we show one image per filter from the collected data.

450 nm filter

 

532 nm filter

600 nm filter

671 nm filter

750 nm filter

800 nm filter

860 nm filter

900 nm filter

930 nm filter

990 nm filter

ND5 filter

  Panoramas

Here we show the panorama stitched for each filter. For each filter, images were collected at five pan-tilt positions. These images were stitched together to produce a panorama.

450 nm panorama

532 nm panorama

600 nm panorama

671 nm panorama

750 nm panorama

800 nm panorama

860 nm panorama

900 nm panorama

930 nm panorama

990 nm panorama

ND5 panorama

 Temperature plots

Data was collected on two separate days with three datasets collected each day. The temperature profile during the data collection is shown below.

Day 1 - data 1

Day 1 - data 2

Day 1 - data 3

Day 2 - data 1

Day 2 - data 2

Day 2 - data 3

Past Projects

Large scale 3D mapping using LIDAR
In this project, we used the Iterative Closest Point (ICP) algorithm to build a mapping pipeline that constructs 3D maps from LIDAR data. The pipeline has been tested on data from Velodyne and RIEGL LIDARs. The Velodyne LIDAR (VLP-16) was mounted on a car and driven around; GPS was not used. Final trajectories and maps were constructed from the LIDAR data alone.
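As a rough, minimal sketch of this kind of pipeline (not the lab's actual implementation), the following Python snippet chains pairwise ICP registrations between consecutive scans with the Open3D library to accumulate a trajectory and a map; the file names, voxel size and correspondence distance are illustrative assumptions.

    # Minimal ICP mapping sketch (illustrative only; not the lab's code).
    # Assumes consecutive LIDAR scans stored as PCD files, e.g. scan_000.pcd, scan_001.pcd, ...
    import copy
    import numpy as np
    import open3d as o3d

    def build_map(scan_files, voxel=0.2, max_corr_dist=1.0):
        pose = np.eye(4)                       # accumulated sensor pose in the map frame
        global_map = o3d.geometry.PointCloud()
        prev = None
        for f in scan_files:
            scan = o3d.io.read_point_cloud(f).voxel_down_sample(voxel)
            if prev is not None:
                # Point-to-point ICP between consecutive scans (no GPS used anywhere).
                reg = o3d.pipelines.registration.registration_icp(
                    scan, prev, max_corr_dist, np.eye(4),
                    o3d.pipelines.registration.TransformationEstimationPointToPoint())
                pose = pose @ reg.transformation          # chain relative motions into a trajectory
            global_map += copy.deepcopy(scan).transform(pose)   # place the scan in the map frame
            prev = scan                                   # keep the untransformed scan for the next pair
        return global_map.voxel_down_sample(voxel)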

 

Vision based GPS-denied Object Tracking and Following for UAV
In this project, we present a vision based control strategy for tracking and following objects using an Unmanned Aerial Vehicle. We have developed an image based visual servoing method that uses only a forward looking camera to track and follow user-specified objects from a multi-rotor UAV, maintaining a fixed distance from the object while keeping it centered in the image plane, without any dependence on GPS. The algorithm is validated using a Parrot AR Drone 2.0 in outdoor conditions while tracking and following people and other static or fast moving objects, demonstrating the robustness of the proposed system against perturbations, illumination changes and occlusions. Please visit the project page for more details.

 

Ars Robotica
Ars Robotica is a collaboration between the Unmanned Systems Lab and the School of Film, Dance and Theatre at Arizona State University. Using the Rethink Robotics Baxter as a test platform, Ars Robotica investigates the possibility of defining and achieving a human quality of movement through robots, validated through the idea of viewing a robot as a performer in theater. Training data is obtained through various modes of sensing, ranging from simple devices such as a Microsoft Kinect to high speed, precise tracking setups such as a 12-camera OptiTrack system. This data is used to define a vocabulary of human motion primitives, helping create a framework for autonomous interpretation and expression of human-like motion through Baxter. Please visit the project page for more details.


Terrain Mapping using UAVs
Three dimensional mapping is an extremely important aspect of geological surveying. Current methods, however, often pose practical challenges. We introduce a technique for performing terrain mapping using unmanned aerial vehicles (UAVs) and standard digital cameras. Using a photogrammetric process called structure from motion (SFM), aerial images can be used to infer 3-dimensional data. Please visit the project page for more details.

 


Autonomous Kite Plane for Aerial Surveillance

The development of an autonomous fixed-wing motorized kite plane, Autokite, offers a unique approach to aerial photography in the field. The inexpensive and lightweight nature of the Autokite makes it ideal for deployment in environments that are remote or extreme.

 
 
 


Autonomous ship board landing of a VTOL UAV
The autonomous landing of Vertical Take Off and Landing (VTOL) Unmanned Aerial Vehicles (UAVs) is a very important capability for autonomous systems. Autonomous landing on a ship deck platform continues to be studied and has only recently been solved for very favorable weather conditions.
Our challenge is to provide the UAV with the capability to land autonomously on ship deck platforms in extreme weather conditions.

 


EGGS (Exploration Geology & Geophysics Sensors)
The EGGS Project, Exploration Geology & Geophysics Sensors, aims to develop a diverse set of robust, self-righting, multi-purpose data-collection platforms capable of assisting scientists and explorers in the field on Earth or through remote deployments to nearby asteroids. With an integrated camera, microscope, accelerometer, magnetometer, and configurations for adding other instruments, EGGS are a low-cost, 3D printable option for students, researchers, and enthusiasts who want to learn more about an environment remotely.


Using UAVs to Assess Signal Strength Patterns for Radio Telescopes
In this work, we consider the design of flight hardware for measuring the signal strength field pattern of an array of radio telescopes. The ultra-stable and robust aerial platform offered by a multi-rotor craft makes this task possible.

 
 
 


Change Detection using airborne Lidar
In the course of this project, we worked with geologists on developing algorithms for finding local displacements of the topography caused by earthquakes. The algorithms use Digital Elevation Models of earthquake sites (before and after the earthquake) obtained from Lidar scanners mounted on aerial vehicles. Please visit the project page for more details.

 


NIR Camera
The objective of the NIR project was to construct an equivalent of the MER PANCAM from readily available commercial parts for use in science and the study of Earth's atmosphere and geological features. Please visit the project page for more details.

 
 
 


Path Planning for Ground Vehicles
The objective of this project was to study and devise new means of motion planning for ground vehicles, using a rover named Raven as the prototype vehicle. More specifically, we try to determine smooth paths for Raven to follow as it traverses waypoints; such paths have wide use in applications, for instance in following an astronaut as (s)he walks along a random path. Please visit the Project Page for more details.
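For illustration only, one common way to obtain smooth paths through a sequence of waypoints is a chord-length-parameterized cubic spline; the SciPy sketch below is an assumed, generic approach rather than Raven's actual planner.

    # Smooth path through 2D waypoints via a parametric cubic spline (illustrative only).
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Example waypoints (x, y) in metres; in practice these come from the mission plan.
    waypoints = np.array([[0.0, 0.0], [5.0, 2.0], [8.0, 7.0], [12.0, 6.0]])

    # Parameterize by cumulative chord length so the waypoint spacing is respected.
    d = np.concatenate(([0.0], np.cumsum(np.linalg.norm(np.diff(waypoints, axis=0), axis=1))))
    spline_x = CubicSpline(d, waypoints[:, 0])
    spline_y = CubicSpline(d, waypoints[:, 1])

    s = np.linspace(0.0, d[-1], 200)                         # densely sampled path to track
    path = np.column_stack((spline_x(s), spline_y(s)))
    heading = np.arctan2(spline_y(s, 1), spline_x(s, 1))     # heading from the path derivatives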

 
 


Plume Detection
The objective of this work was to autonomously detect manually verified features (plumes) in images under onboard conditions. Success enables these methods to be applied to future outer solar system missions and facilitates onboard autonomous detection of transient events and features regardless of viewing and illumination effects, electronic interference, and physical image artifacts. Autonomous detection makes the most of the spacecraft's memory capacity and downlink bandwidth by prioritizing data of utmost scientific significance. Please visit the project page for more details.

 

R.A.V.E.N.
RAVEN (Robotic Assist Vehicle for Extraterrestrial Navigation) was designed for the 2010 Revolutionary Aerospace Systems Concepts Academic Linkage (RASC-AL) contest.  Please visit the project page for more details.

 
 


Road Detection from UAV Aerial Imagery

We use aerial images taken from UAVs to detect the presence of roads. In this work, we developed variations of algorithms suited to different types of roads and detection tasks. Please visit the project page for more details.

 
 
 
 


Autonomous Sampling
Autonomous Underwater Vehicles have proven themselves to be indispensable tools for mapping and sampling aquatic environments. However, these sensing platforms can only travel as far as their stored energy capacities allow. Thus we have researched both offline and online adaptive sampling strategies that optimize both the estimation accuracy of the models derived from sampling and the energy consumption of the vehicle. Please visit the project page for more details.

3D Mapping

Current methods of large scale terrain modeling can be cost and time prohibitive. We present a method for integrating low cost cameras and unmanned aerial vehicles for the purpose of 3D terrain mapping. Using structure from motion, aerial images taken of the landscape can be reconstructed into 3D models of the terrain. This process is well suited for use on unmanned aerial vehicles due to the light weight and low cost of equipment.
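As a hedged illustration of the structure-from-motion idea (not the lab's actual pipeline), the sketch below recovers the relative camera pose and a sparse terrain point cloud from two overlapping aerial images with OpenCV; the calibrated camera matrix K and the two-view setup are assumptions.

    # Two-view structure-from-motion sketch with OpenCV (illustrative only).
    import cv2
    import numpy as np

    def two_view_sfm(img1, img2, K):
        orb = cv2.ORB_create(4000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        p1 = np.float32([k1[m.queryIdx].pt for m in matches])
        p2 = np.float32([k2[m.trainIdx].pt for m in matches])

        # Essential matrix and relative camera pose (attitude and position, up to scale).
        E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)

        # Triangulate the inlier correspondences into sparse 3D terrain points.
        P1 = K @ np.hstack((np.eye(3), np.zeros((3, 1))))
        P2 = K @ np.hstack((R, t))
        pts4 = cv2.triangulatePoints(P1, P2, p1[mask.ravel() == 1].T, p2[mask.ravel() == 1].T)
        return R, t, (pts4[:3] / pts4[3]).T      # camera pose and 3D points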

Image 1: The attitude and position of the helicopter while taking images is represented by the blue rectangles. This demonstrates that the GPS/INS information is not needed for creating the map, but, rather, extraction of both the map and the attitude + position of the helicopter is possible simultaneously.

– Attitude Information via Photogrammetry

 

 


The following models were generated from flights at the White Sands Missile Range (WSMR) in Las Cruces, New Mexico. They were created in real time using vision based 3D mapping. The UAV covered an area of approximately 2 km x 1 km, flying at an altitude of 600 m AGL. The point cloud models display the geometric locations of interpolated points. Models with images draped over them are also shown below.

 

– Point Cloud Model

WSMR – Las Cruces, NM

 

 

 

 

– Model with Image draped over

Las Cruces, NM

 

 

– Point Cloud generated from Las Cruces flights

 

– Model generated at White Sands Missile Range

 

Comparisons of this technique have also been performed. The image below compares DEMs produced at varying resolutions. All DEMs were generated using vision based 3D mapping techniques. The area shown is 33.4592 N, 115.8565 W in the Salton Sea State Recreation Park in Palm Springs, CA.

– Aerial photo of region- 10 cm DEM

– 25 cm DEM

– 1 m DEM

 

 

 

 

Vision based models from this area were also compared to LIDAR datasets. The image below shows a DEM generated from each technique.

 

 

 

 

Vision based GPS-denied Object Tracking and Following for Unmanned Aerial Vehicles


1. Abstract

We present a vision based control strategy for tracking and following objects using an Unmanned Aerial Vehicle. We have developed an image based visual servoing method that uses only a forward looking camera for tracking and following objects from a multi-rotor UAV, without any dependence on GPS systems. Our proposed method tracks a user specified object continuously while maintaining a fixed distance from the object and simultaneously keeping it in the center of the image plane. The algorithm is validated using a Parrot AR Drone 2.0 in outdoor conditions while tracking and following people and fast moving objects, including under occlusion, showing the robustness of the proposed system against perturbations and illumination changes. Our experiments show that the system is able to track a great variety of objects present in suburban areas, among others: people, windows, AC machines, cars and plants.

The code of this project has been made open source under a BSD license and is available on GitHub at the following site:

https://github.com/Vision4UAV/cvg_ardrone2_ibvs

The images of the videos shown in 3.1 through 3.5 are made public as datasets, which can be found in the following addresses:

  • In the CVG-UPM ftp (user:publicftp , pass:usuarioftp): ftp://138.100.76.91/CVG_PUBLIC/DataSets/IBVS_ardrone2_datasets/
  • In google drive: http://bit.ly/1sp3qxG

(left) Image of the AR Drone 2.0 during one of the Visual Servoing experiments. (right) Modified front image of the drone showing the controller references (green), feedback (blue), and control error (red). The drone is controlled from an off-board computer through a WiFi link, and the target object is selected by the experimenter. Several experiments were conducted in a suburban area, testing our object following software architecture on various objects, such as people, cars and windows, among others.

2. Motivation

The motivation of this work is to show that Visual Object Tracking can be a reliable source of information for Unmanned Air Vehicles (UAVs) performing visually guided tasks in GPS-denied, unstructured outdoor environments. Navigating populated areas is more challenging for a flying robot than for a ground robot because it must stabilize itself at all times, in addition to performing the other usual robotic operations. This provides a second objective for the presented work: to show that Visual Servoing, i.e. positioning a VTOL UAV relative to an object at an approximately fixed distance, is possible for a great variety of objects. The capability of autonomously tracking and following arbitrary objects is interesting in itself, because it can be directly applied to visual inspection, among other civilian tasks.
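A highly simplified sketch of this kind of image-based control law is shown below: the tracker supplies the target's bounding box, and proportional terms generate yaw-rate, climb-rate and forward-speed commands that keep the target centered and at a reference apparent size (a proxy for distance). The interface, gains and sign conventions are illustrative assumptions; the actual controller is described in the publications and the GitHub repository.

    # Very simplified image-based visual servoing loop (illustrative only; the real
    # controller in the paper and repository is more sophisticated).
    def ibvs_command(bbox, img_w, img_h, ref_area_frac=0.05,
                     k_yaw=1.0, k_alt=1.0, k_fwd=2.0):
        """bbox = (x, y, w, h) of the tracked object in pixels."""
        x, y, w, h = bbox
        cx, cy = x + w / 2.0, y + h / 2.0

        # Feature errors: keep the target centered and at a reference apparent size.
        err_x = (cx - img_w / 2.0) / img_w                         # horizontal offset -> yaw rate
        err_y = (cy - img_h / 2.0) / img_h                         # vertical offset   -> climb rate
        err_size = ref_area_frac - (w * h) / float(img_w * img_h)  # size error        -> forward speed

        # Sign conventions depend on the vehicle's body frame; gains are placeholders.
        yaw_rate = -k_yaw * err_x
        climb_rate = -k_alt * err_y
        forward_speed = k_fwd * err_size       # target too small -> move closer, too big -> back off
        return yaw_rate, climb_rate, forward_speed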

Important

The work shown on this website is currently being submitted for peer review to various conferences. We will add more information as we receive feedback on our submissions.

3. Videos

  • Videos in section 3.5 include decoupling heuristics on the controller
  • Videos 3.1 through 3.4: tests performed from 28 June 2013 to 12 July 2013
  • All videos are recorded in real time (the logged frames were synchronized using our logs), so poor or lost WiFi connections are visible and occur where the video freezes.
  • Sometimes the on-board videos incorrectly show f_yr, the vertical image feature reference. In those cases the videos have been watermarked showing the correct control error in the lower right of the video.
  • The videos are long because they show complete tests. This way the viewer can judge the performance of the system based on these experiments.

3.5 Tests on person following with decoupling heuristics on the controller

3.4 Tests on person following where our system was tested against target occlusion

3.3 Tests on car and person following

3.2 Tests on a suburban area selecting arbitrary objects/targets from the street

3.1 Tests on a target that matches the size and distance expected by the controller's tuning

4 Publications

  • Jesus Pestana, Jose Luis Sanchez-Lopez, Srikanth Saripalli, and Pascual Campoy. “Computer vision based general object following for GPS-denied multirotor unmanned vehicles.” In American Control Conference (ACC), 2014, pp. 1886-1891. IEEE, 2014.
  • Jesus Pestana, Jose Luis Sanchez-Lopez, Pascual Campoy, and Srikanth Saripalli. “Vision based GPS-denied Object Tracking and following for unmanned aerial vehicles.” In Safety, Security, and Rescue Robotics (SSRR), 2013 IEEE International Symposium on, pp. 1-6. IEEE, 2013.

5. Researchers/Authors

The PhD Students and Researchers that have actively worked on this project are:

  • Msc. Jesús Pestana Puerta (PhD. Candidate at CVG, CAR, CSIC-UPM).
  • Msc. José Luis Sanchez-Lopez (PhD. Candidate at CVG, CAR, CSIC-UPM).
  • Professor Dr. Srikanth Saripalli (Texas A&M).
  • Professor Dr. Pascual Campoy (CVG, CAR, CSIC-UPM).

6. Other collaborators

  • Patrick McGarey and Msc. Mariusz Wzorek : help and advice during experimental testing.
  • Msc. Ignacio Mellado and PhD Iván F. Mondragón B.: extensive shared experience on vision based real-time robotics and multirotors in particular.

Change Detection using Airborne Systems

 Table of contents
  1. Objective
  2. Problem
  3. Experiments on B4 dataset
  4. Experiments on El Mayor Cucupah dataset
  5. Finding the right window size
  6. Detecting regions containing the fault
  7. Further directions
  8. Related publications
  9. Source code and Test dataset

Objective

The objective of this project is to determine 3-dimensional, local ground displacements caused by an earthquake. The technique requires pre- and post-earthquake point cloud datasets, such as those collected using airborne Light Detection and Ranging (Lidar) or SFM-generated point clouds from aerial images. A typical aerial point cloud of a terrain is shown in the figure below.


Aerial Point Cloud of a Site

We used the publicly available B4 Lidar and El Mayor Cucupah datasets. The B4 dataset covers the San Andreas Fault System of Central and Southern California. The El Mayor Cucupah dataset provides pre- and post-earthquake data for the April 4, 2010 earthquake that was felt throughout Southern California, Arizona, Nevada, and Baja California Norte, Mexico.
<< Table Of Contents >>

Problem

This problem is formulated as a point cloud registration problem in which the full point cloud is divided into smaller windows, and for each window the local displacement that best restores the post-earthquake point cloud onto its pre-earthquake equivalent must be found. We used the ICP algorithm to register the point clouds.

Initially, we introduced a synthetic earthquake into the B4 lidar dataset, in which displacement vectors were applied to different regions of the terrain, and the displacement vectors recovered by ICP were compared with the input displacement vectors. We then tested this approach on the El Mayor Cucupah dataset (a real earthquake dataset) and made some observations on the resulting displacements.
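A minimal sketch of the synthetic experiment for a single window, assuming Open3D, is shown below: a known displacement is applied to the window and ICP is used to recover it (illustrative only, not the project's code).

    # Apply a synthetic displacement to one window of points and recover it with ICP.
    import numpy as np
    import open3d as o3d

    def recover_window_displacement(window_pts, true_shift=(1.5, -0.8, 0.2)):
        pre = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(window_pts))
        post = o3d.geometry.PointCloud(
            o3d.utility.Vector3dVector(window_pts + np.asarray(true_shift)))

        # Register the post-earthquake window back onto the pre-earthquake window.
        reg = o3d.pipelines.registration.registration_icp(
            post, pre, 5.0, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        # ICP moves 'post' back onto 'pre', so the ground displacement is the negated translation.
        return -reg.transformation[:3, 3]        # should be close to true_shift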
<< Table Of Contents >>

Experiments on the B4 dataset (Synthetic Earthquake)

Two experiments were conducted with different displacement fields and the results are shown in the figures below. Black arrows are horizontal displacements and coloured circles denote vertical displacements. The artificial fault introduced is shown as a dotted line; points on the right side of the fault were given a south-east displacement and points on the left were given a north-west displacement. A vertical z displacement was also introduced.

Earthquake with simple displacement vectors

Earthquake with different displacement vectors

<< Table Of Contents >>

Experiments on the El Mayor Cucupah dataset (Real Earthquake dataset)

We considered a 2 km x 2 km region in this dataset and present initial results for it. The figure below shows the results after registration. The differences in these displacements are clear on either side of the fault (shaded in green).


<< Table Of Contents >>

Finding the right window size

In the above experiments we assumed a fixed window size for registration. We also investigated an algorithmic way of finding the right window size. We begin by choosing an arbitrary window size in the source cloud (e.g. 200 m × 200 m). For each of these windows, the corresponding window in the target data is identified based on x and y coordinates. Next, we compute the rigid body transformation between the source and target windows using the ICP algorithm. The window is then split into four smaller windows of equal size and the rigid body transformation is computed for every child window. The transformation is validated after each split and the associated error computed. Based on the differences in error after consecutive splits, we decide whether further splitting is necessary. We verified experimentally that we cannot obtain small errors for very small window sizes (~10 m) given the point cloud densities and input displacements. An analysis of this error indicates when to stop splitting, yielding the right window size. A detailed description of this approach can be found in our paper.
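The splitting logic might look roughly like the following recursive sketch; register_window is a hypothetical placeholder that runs ICP on one window and returns a transformation and a residual error, and the minimum window size and improvement threshold are illustrative values rather than those used in the paper.

    # Rough sketch of the adaptive window-splitting idea (illustrative only).
    import numpy as np

    def crop(pts, b):
        xmin, ymin, xmax, ymax = b
        m = ((pts[:, 0] >= xmin) & (pts[:, 0] < xmax) &
             (pts[:, 1] >= ymin) & (pts[:, 1] < ymax))
        return pts[m]

    def adaptive_windows(src, tgt, bounds, register_window, min_size=25.0, improve_tol=0.05):
        """Recursively split a window while splitting keeps reducing the ICP residual."""
        xmin, ymin, xmax, ymax = bounds
        T, err = register_window(crop(src, bounds), crop(tgt, bounds))
        if max(xmax - xmin, ymax - ymin) <= min_size:
            return [(bounds, T, err)]                    # very small windows are too sparse to trust

        xm, ym = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
        children = [(xmin, ymin, xm, ym), (xm, ymin, xmax, ym),
                    (xmin, ym, xm, ymax), (xm, ym, xmax, ymax)]
        child_err = np.mean([register_window(crop(src, c), crop(tgt, c))[1] for c in children])

        if err - child_err < improve_tol * err:          # splitting no longer helps: stop here
            return [(bounds, T, err)]
        out = []
        for c in children:                               # otherwise recurse into each child
            out += adaptive_windows(src, tgt, c, register_window, min_size, improve_tol)
        return out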
<< Table Of Contents >>

Detecting regions containing the fault

An interesting question to answer with earthquake datasets is 'Can we find the regions containing a fault?'. Our windowing approach to change detection enabled us to examine this, and we developed an information theoretic approach to classify the windows containing a fault. The following image shows the dataset split into multiple windows, with the thick black line showing the fault. The results of the information theoretic approach, extracting the windows containing the fault, are shown on the right. A detailed description of this approach can be found in our paper.

Left – The point cloud split into different windows; Right – Fault detection mechanism extracting windows containing the fault

These results were submitted to the International Symposium on Experimental Robotics, 2012 and are under review.
<< Table Of Contents >>

Further directions

We are now running experiments on the entire dataset covering 850 sq km. Another direction we are exploring is the use of SFM-generated point clouds instead of Lidar, which offers significant advantages in terms of cost.

The following image shows the point cloud generated using aerial image sequences over a strip of land obtained from a helicopter. This strip of land coincides with a part of the data in the B4 lidar dataset.

 

The B4 lidar and SFM point clouds have different coordinate systems, so before we do change detection the first task is to align the two point clouds globally. We treat this as a template fitting problem: the best fit for the SFM point cloud within the lidar point cloud is computed, and change detection is done after the template fit. We are still working on the results.
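One simple way to perform such a template fit, sketched below under the assumption that a coarse horizontal grid search followed by ICP refinement is adequate, is to slide the SFM cloud over the lidar cloud and score each offset by the mean nearest-neighbour distance. The search range and step are placeholders, and this is not necessarily the method we will finally adopt.

    # Coarse template fit of the SFM point cloud into the lidar cloud (illustrative only).
    import numpy as np
    from scipy.spatial import cKDTree

    def template_fit(sfm_pts, lidar_pts, search=500.0, step=25.0):
        tree = cKDTree(lidar_pts[:, :2])                 # match on the horizontal footprint first
        best = (np.inf, (0.0, 0.0))
        for dx in np.arange(-search, search + step, step):
            for dy in np.arange(-search, search + step, step):
                shifted = sfm_pts[:, :2] + [dx, dy]
                d, _ = tree.query(shifted)
                score = d.mean()                          # mean horizontal misfit for this offset
                if score < best[0]:
                    best = (score, (dx, dy))
        return best                                       # (misfit, (dx, dy)); refine with ICP afterwards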
<< Table Of Contents >>

Related publications

  • Change Detection using Airborne Systems: Applications to Earthquakes. Aravindhan K Krishnan, Edwin Nissen, Srikanth Saripalli and Ramon Arrowsmith. International Symposium on Experimental Robotics (ISER), 2012.
  • Three-dimensional coseismic surface displacements and rotations from pre- and post-earthquake Lidar point clouds. Edwin Nissen, Aravindhan K Krishnan, Ramon Arrowsmith and Srikanth Saripalli. Geophysical Research Letters, 2012.

Source code and Test dataset

The source code can be downloaded here. The test dataset can be downloaded here.

Collaborative Localization

The main aim of this project is to create a framework that can perform collaborative localization between groups of micro aerial vehicles (multirotor vehicles) using monocular cameras as the only sensors.

Especially in the context of UAV swarms, which are rapidly becoming a popular idea in robotics, the focus is usually on small platforms with limited sensory payload and computational capacity. In such groups, having each vehicle run its own version of localization algorithms such as SLAM can be computationally intensive. Although monocular cameras are ubiquitous on current day MAV platforms, they are unable to resolve scale on their own, which necessitates further sensor fusion. Given these challenges, collaboration is desirable as it has the potential to reduce computation as well as improve localization accuracy: relative measurements can be fused with individual measurements to reduce estimation error over the group. Collaboration also allows for localizing all vehicles in one frame of reference, which is advantageous for applications such as formation control.

In this algorithm, we initially detect features from all cameras and extract common features through matching. The matches are then propagated through an adaptive RANSAC technique [ref], which results in the creation of a map. Once created, this map is accessible from all vehicles, allowing each vehicle to perform its own localization through feature tracking and 3D-2D correspondences. We assume that the metric distance and heading between at least two vehicles is known before initializing the algorithm, in order to have an estimate of the scale. As the vehicles move around, if the number of tracked features falls below a certain threshold, the vehicles 'collaborate' once more to match common features and update the global map. Thus, continuous communication between vehicles is not necessary. The frequency at which the map update happens depends on factors such as how fast the UAVs move, how quickly the environment changes, etc. We are currently also looking into techniques such as covariance intersection so that two vehicles can use relative measurements between each other if needed, without having to perform a full map update.
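A sketch of the per-vehicle localization step against the shared map is given below: known 3D map points and their tracked 2D image locations are passed to a RANSAC PnP solver in OpenCV. The interface is an illustrative assumption, and the adaptive RANSAC mapping step itself is not shown.

    # Localize one vehicle against the shared map from 3D-2D correspondences (illustrative only).
    import cv2
    import numpy as np

    def localize_against_map(map_pts_3d, image_pts_2d, K, dist_coeffs=None):
        dist_coeffs = np.zeros(5) if dist_coeffs is None else dist_coeffs
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            np.asarray(map_pts_3d, dtype=np.float32),
            np.asarray(image_pts_2d, dtype=np.float32),
            K, dist_coeffs, reprojectionError=3.0)
        if not ok:
            return None                        # too few good correspondences: time for a map update
        R, _ = cv2.Rodrigues(rvec)
        # Pose of the camera in the shared map frame (common to all vehicles).
        return R.T, (-R.T @ tvec).ravel()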

Recently, we have tested this algorithm using Microsoft AirSim, a UAV simulator that we modified to simulate multiple vehicles. AirSim is built on Unreal Engine, a high fidelity video game engine with features such as high resolution textures, realistic shadows, post processing and visual effects. A sample video of localization being performed using AirSim images can be seen below.

Autonomous Sampling

Autonomous vehicles are very useful for collecting data in a plethora of environments of scientific interest (land, air, and sea). However, the platform being used to collect the sensing data is limited by the amount of stored energy it carries aboard. The sensing platform can only travel and sense as far as its stored energy capacity will allow. The question then becomes how to obtain the most accurate estimation of the scalar field while being limited by the stored energy constraints of the vehicle being used to sample it.


 

We have experimentally evaluated various sampling strategies based upon their relative estimation accuracy and energy consumption with an AUV (pictured above). Initially, we are evaluating offline sampling path strategies. The strategies evaluated so far are systematic and random stratified sampling, executed along lawnmower and spiral sample paths.

Real world data collected by an AUV was used to generate the data used in the experimental evaluation.

Through the results of the estimation error evaluation, we hypothesize that the systematic sampling strategies provide better estimation errors for both isotropic and anisotropic scalar fields for sparse sampling densities, and the random stratified sampling strategies provide better estimation errors for dense sampling densities.
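For illustration, simple waypoint generators for the lawnmower and spiral sample paths mentioned above could look like the sketch below; the survey geometry is assumed, and the estimation and energy-consumption models are omitted.

    # Waypoint generators for lawnmower and spiral sampling paths (illustrative only).
    import numpy as np

    def lawnmower(width, height, spacing):
        pts = []
        for i, y in enumerate(np.arange(0.0, height + 1e-9, spacing)):
            xs = [0.0, width] if i % 2 == 0 else [width, 0.0]    # alternate sweep direction
            pts += [(x, y) for x in xs]
        return np.array(pts)

    def spiral(cx, cy, max_radius, spacing, pts_per_turn=40):
        n_turns = max_radius / spacing
        theta = np.linspace(0.0, 2 * np.pi * n_turns, int(pts_per_turn * n_turns))
        r = spacing * theta / (2 * np.pi)                        # Archimedean spiral, ring gap = spacing
        return np.column_stack((cx + r * np.cos(theta), cy + r * np.sin(theta)))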

NIR Camera

Near Infra-Red multispectral Camera

Objective: Construct an equivalent of the MER PANCAM from readily available commercial parts for use in science and the study of Earth's atmosphere and geological features.

NIR Camera

MER Pancam – Overview

 

  • A pair of high resolution, stereo, color CCD cameras present in Spirit and Opportunity Mars Exploration Rovers (MERs) to obtain panoramic multispectral images of the Martian sky and surface.
  • Geology/Geomorphology: Reveals the past history of a landing site by generating a panoramic view of the surrounding terrain spanning a full 360 degrees. It also provides fine-scale geological information on rock and soil attributes of the Martian terrain.
  • Color/composition: The multispectral capability of the camera helps in determining the soil mineralogy, the various weathering processes that could have affected the terrain and the determination of suitable target spots for further research.
  • Photometry: It helps in obtaining physical properties of the terrain such as grain size and roughness by exploiting the reflectance properties of rocks and soils by choosing appropriate incidence and emission angles.
  • Sun/Sky imaging: To monitor dust and cloud opacity.

Camera Comparison:

Point Grey Grasshopper Camera Key Specifications:

  • GRAS-14S5M/C ICX285 2/3” (EXView HAD)
  • Power Consumption: 3.1W
  • Mass: 104g (without optics)
  • Operating Temp: 0 deg C to 40 deg C
  • The corresponding focal length for the ICX285 required to obtain an angular resolution of 0.28 mrad/pixel is 23 mm (see the derivation below).
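As a quick check of that figure, using the ICX285's full 6.45 um pixel pitch (rounded to 6 um in the table below):

    f = \frac{p}{\mathrm{IFOV}} = \frac{6.45\,\mu\mathrm{m}}{0.28\,\mathrm{mrad/pixel}} \approx 23\,\mathrm{mm}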
ATTRIBUTE                                        ICX285            PANCAM
Sensor size                                      2/3”              1”
Pixel size                                       6 um              12 um
Maximum resolution                               1384×1036         1024×1024
Wavelength range (spectral sensitivity > 0.5)    400 nm – 760 nm   580 nm – 900 nm
Spectral sensitivity at 1000 nm                  0.06              0.2

NIR Camera Field Testing

NIR Camera mounted on NASA's K10 rover for field testing

NIR False Color Image

NIR false color panorama image taken at NASA AMES facility in CA.

NIR false color with plant life highlighted panorama image taken at NASA AMES facility in CA.

 

SP crater data

The data collected using this camera at SP Crater near Flagstaff, Arizona can be found here.

 

Autonomous ship board landing of a VTOL UAV

OBJECTIVE

The main objective is to provide a Vertical Take Off and Landing Unmanned Aerial Vehicle (VTOL-UAV) with the capability to land autonomously on a ship deck, without human intervention and in a safe way.

Autonomous Ship Landing

 

PROJECT OVERVIEW

The ASTRIL research group has two VTOL UAVs:

  • A Rotomotion SR30
  • A Helipse HE300

These two helicopters are equipped with autopilots that allow us to send them high-level control commands and that simplify the control task. To simulate the movement of the ship deck, the ASTRIL research group has a Servos & Simulation, Inc. six-axis motion platform.

The main steps in this project are the following:

1. Ship deck simulation

In this step, the real movement of a ship at sea is simulated. This movement depends on the sea state, the wave direction and, of course, the ship itself. Once we have the offline simulated movement of the ship, it has to be reproduced on the motion platform by computing the inverse kinematics and avoiding singular configurations.

2. Measurement of the pose of the UAV with respect to the ship deck

This step consists of measuring the pose (position and orientation) of the UAV with respect to the ship deck. Our first proposal is to use a computer vision system made up of a single Point Grey, Inc. camera. As the size of the landing platform is known, 3D reconstruction is feasible.

We are also studying the use of a stereo vision system or a mixed camera-lidar system.
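A minimal sketch of the single-camera measurement, assuming the four platform corners can be detected in the image and the platform dimensions are known, uses a planar PnP solve in OpenCV; the dimensions and interface below are illustrative assumptions, not the system's actual implementation.

    # Pose of the camera (hence the UAV) relative to the landing platform from one image.
    # Corner detection is omitted; the platform size below is an assumed value.
    import cv2
    import numpy as np

    PLATFORM_W, PLATFORM_H = 2.0, 2.0        # assumed platform dimensions in metres
    corners_3d = np.float32([[0, 0, 0], [PLATFORM_W, 0, 0],
                             [PLATFORM_W, PLATFORM_H, 0], [0, PLATFORM_H, 0]])

    def platform_relative_pose(corners_2d, K, dist=np.zeros(5)):
        """corners_2d: the four platform corners detected in the image (pixels)."""
        ok, rvec, tvec = cv2.solvePnP(corners_3d, np.float32(corners_2d), K, dist,
                                      flags=cv2.SOLVEPNP_IPPE)
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)
        return R.T, (-R.T @ tvec).ravel()    # camera orientation and position in the deck frame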

3. State Estimation of the VTOL and the Ship Deck

As the measurements can be noisy and may be lost for periods of time, a state estimator is needed to achieve reliable state estimation for the subsequent steps.
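As a minimal sketch of what such an estimator could do for a single axis of the relative position, a constant-velocity Kalman filter predicts at every step and updates only when a measurement is available; the motion model and noise values below are illustrative assumptions, not the estimator actually used in the project.

    # Constant-velocity Kalman filter for one axis (handles noisy and missing measurements).
    import numpy as np

    class KF1D:
        def __init__(self, dt, q=0.5, r=0.05):
            self.x = np.zeros(2)                        # state: [position, velocity]
            self.P = np.eye(2)
            self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
            self.Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
            self.H = np.array([[1.0, 0.0]])             # only position is measured
            self.R = np.array([[r]])

        def step(self, z=None):
            # Predict (always); update only when a measurement is available.
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            if z is not None:
                y = z - self.H @ self.x
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + (K @ y).ravel()
                self.P = (np.eye(2) - K @ self.H) @ self.P
            return self.x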

4. Autonomous Landing Controller and Simulation

In this step the controller is designed and tested using the previous simulation of the ship deck and a simulation of the VTOL UAV.

Our proposal is a fuzzy logic controller, but we are still working on it.

5. Implementation in the real UAVs and Tests with the motion platform

This is the last step that concludes the project.

 

LINKS

See www.vision4uav.com/?q=platform_landing and www.vision4uav.com/?q=node/340

 

RESEARCHERS

The main researchers of this project are:

  • Jose Luis Sanchez-Lopez (PhD. Candidate at UPM).
  • Jesus Pestana (PhD. Candidate at UPM).
  • Professor Dr. Srikanth Saripalli (Texas A&M).

And as Jose Luis’ Supervisor at UPM:

  • Professor Dr. Pascual Campoy (UPM)

 

TIMELINE

The research on this project began in July 2012 with Jose Luis' stay at ASTRIL.

 

PUBLICATIONS

  • J. L. Sanchez-Lopez, J. Pestana, S. Saripalli, P. Campoy. An Approach Towards Visual Autonomous ship board landing of a VTOL UAV. Journal of Intelligent and Robotic Systems. April 2014. Volume 74, Issue 1-2, pp. 113-127. Springer Netherlands. Print ISSN: 0921-0296. Online ISSN: 1573-0409. IF=0.827, Q3.
  • J. L. Sanchez-Lopez, S. Saripalli, P. Campoy, J. Pestana, C. Fu. Visual Autonomous Ship Board Landing of a VTOL UAV. 2013 International Conference on Unmanned Aircraft Systems (ICUAS’13). Atlanta, Georgia (USA). May 28-31, 2013.
  • J. L. Sanchez-Lopez, S. Saripalli, P. Campoy. Autonomous ship board landing of a VTOL UAV. AHS 69 Annual Forum 2013. Phoenix, Arizona (USA). May 21-23, 2013.

RAVEN

RAVEN (Robotic Assist Vehicle for Extraterrestrial Navigation) was designed for the 2010 Revolutionary Aerospace Systems Concepts Academic Linkage (RASC-AL) contest.

RAVEN @ Sunset

RAVEN posing after a long field test in Gila Bend, Arizona

Key Specifications:

  • Three-wheel design
  • 330-pound (150-kg) rover
  • Traverses 20 degree slopes
  • Travels at speeds up to 3 feet/second (1m/s)

 

 

Field Testing

RAVEN Field Testing with astronaut

RAVEN Field Testing with astronaut at night

RAVEN beneath the stars in Gila Bend, AZ


