
Unmanned Systems Lab

Autonomous and Unmanned Vehicles

Texas A&M University College of Engineering


Autonomous ship board landing of a VTOL UAV

OBJECTIVE

The main objective is to provide a Vertical Take-Off and Landing Unmanned Aerial Vehicle (VTOL UAV) with the capability to land autonomously and safely on a ship deck, without human intervention.

Autonomous Ship Landing

 

PROJECT OVERVIEW

The ASTRIL research group has two VTOL UAVs:

  • A Rotomotion SR30
  • A Helipse HE300

Both helicopters are equipped with an autopilot that accepts high-level control commands, which simplifies the control task. To simulate the movement of the ship deck, the ASTRIL research group uses a Servos & Simulation, Inc. six-axis motion platform.

The main steps in this project are the following:

1. Ship deck simulation

In this step, the real motion of a ship at sea is simulated. This motion depends on the sea state, the wave direction and, of course, the ship itself. Once the ship motion has been simulated offline, it is reproduced on the motion platform by computing the inverse kinematics while avoiding singular configurations.
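To make the inverse-kinematics step concrete, the Python sketch below computes the six actuator leg lengths of a generic Stewart-type six-axis platform for a desired deck pose. The attachment geometry and the sample pose are hypothetical placeholders, not our platform's actual parameters; a full implementation would also monitor the platform Jacobian's conditioning to stay away from singular configurations.

    import numpy as np

    def rotation_matrix(roll, pitch, yaw):
        # ZYX Euler-angle rotation matrix (angles in radians).
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        return np.array([
            [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
            [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
            [-sp,     cp * sr,                cp * cr]])

    def leg_lengths(base_pts, plat_pts, t, rpy):
        # Inverse kinematics of a 6-DOF Stewart platform:
        # length_i = | t + R p_i - b_i | for each attachment pair.
        R = rotation_matrix(*rpy)
        return np.linalg.norm(t + plat_pts @ R.T - base_pts, axis=1)

    # Hypothetical attachment geometry (metres) and one sample deck pose.
    ang = np.deg2rad([0, 60, 120, 180, 240, 300])
    base_pts = np.stack([1.2 * np.cos(ang), 1.2 * np.sin(ang), np.zeros(6)], axis=1)
    plat_pts = np.stack([0.8 * np.cos(ang), 0.8 * np.sin(ang), np.zeros(6)], axis=1)
    t = np.array([0.0, 0.0, 1.0])             # heave only
    rpy = np.deg2rad([3.0, -2.0, 0.0])        # deck roll, pitch, yaw
    print(leg_lengths(base_pts, plat_pts, t, rpy))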

2. Measurement of the pose of the UAV with respect to the ship deck

This step consists of measuring the pose (position and orientation) of the UAV with respect to the ship deck. Our first proposal is a computer vision system built around a single Point Grey, Inc. camera. Since the size of the landing platform is known, 3D reconstruction is feasible.
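Because the platform's corner geometry is known, this reduces to a standard Perspective-n-Point (PnP) problem. The sketch below, which assumes OpenCV and uses hypothetical corner coordinates, pixel detections and camera intrinsics, shows the core computation.

    import numpy as np
    import cv2

    # Known 3D corners of the landing platform in the deck frame (metres).
    # The 1 m x 1 m square here is a hypothetical size.
    object_pts = np.array([[-0.5, -0.5, 0.0], [ 0.5, -0.5, 0.0],
                           [ 0.5,  0.5, 0.0], [-0.5,  0.5, 0.0]], dtype=np.float64)

    # Corresponding pixel detections of those corners (hypothetical values).
    image_pts = np.array([[312., 248.], [402., 251.],
                          [398., 341.], [309., 338.]], dtype=np.float64)

    # Hypothetical pinhole intrinsics; real ones come from calibration.
    K = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])
    dist = np.zeros(5)                      # assume an undistorted image

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
    if ok:
        R, _ = cv2.Rodrigues(rvec)          # deck-to-camera rotation
        print("camera position in deck frame:", (-R.T @ tvec).ravel())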

We are also studying the use of a stereo vision system or a combined camera-LIDAR system.

3. State Estimation of the VTOL and the Ship Deck

Since the measurements can be noisy, and may be lost for periods of time, a state estimator is needed to provide a reliable state estimate for the subsequent steps.
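A common pattern for this is a Kalman filter that keeps predicting through measurement dropouts and corrects only when a vision measurement arrives. The one-axis, constant-velocity Python sketch below illustrates that pattern; the noise levels are hypothetical and it is not our final estimator.

    import numpy as np

    dt = 0.02                                  # 50 Hz estimation loop
    F = np.array([[1., dt], [0., 1.]])         # constant-velocity model
    H = np.array([[1., 0.]])                   # we only measure position
    Q = np.diag([1e-4, 1e-2])                  # process noise (hypothetical)
    Rm = np.array([[4e-2]])                    # measurement noise (hypothetical)

    x = np.zeros((2, 1))                       # state: [position, velocity]
    P = np.eye(2)

    def kf_step(x, P, z):
        # One predict/update cycle; z is None when vision is lost.
        x, P = F @ x, F @ P @ F.T + Q          # predict (always)
        if z is not None:                      # update only when measured
            y = np.array([[z]]) - H @ x
            S = H @ P @ H.T + Rm
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
        return x, P

    for z in [0.10, 0.12, None, None, 0.18]:   # a dropout of two frames
        x, P = kf_step(x, P, z)
    print(x.ravel())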

4. Autonomous Landing Controller and Simulation

In this step, the controller is designed and tested using the ship-deck simulation from step 1 together with a simulation of the VTOL UAV.

Our current proposal is a fuzzy logic controller, although this is still a work in progress.
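Since the controller is still being designed, the following Python fragment is only a minimal sketch of the idea: a fuzzy rule base with triangular membership functions mapping the height error above the deck to a commanded descent rate. All membership ranges and rule consequents are hypothetical.

    import numpy as np

    def tri(x, a, b, c):
        # Triangular membership function peaking at b.
        return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

    def fuzzy_descent_rate(height_err):
        # Map height error above the deck (m) to a commanded descent
        # rate (m/s) via three rules and a weighted mean (defuzzification).
        mu = {"low":  tri(height_err, -0.5, 0.0, 1.0),
              "mid":  tri(height_err,  0.5, 1.5, 3.0),
              "high": tri(height_err,  2.0, 4.0, 8.0)}
        out = {"low": 0.1, "mid": 0.5, "high": 1.0}   # rule consequents (m/s)
        w = sum(mu.values())
        return sum(mu[k] * out[k] for k in mu) / w if w > 0 else 0.0

    print(fuzzy_descent_rate(2.0))   # moderate error -> moderate descent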

5. Implementation on the real UAVs and tests with the motion platform

This is the last step that concludes the project.

 

LINKS

See www.vision4uav.com/?q=platform_landing and www.vision4uav.com/?q=node/340

 

RESEARCHERS

The main researchers of this project are:

  • Jose Luis Sanchez-Lopez (PhD. Candidate at UPM).
  • Jesus Pestana (PhD. Candidate at UPM).
  • Professor Dr. Srikanth Saripalli (Texas A&M).

And as Jose Luis’ Supervisor at UPM:

  • Professor Dr. Pascual Campoy (UPM)

 

TIMELINE

The research on this project began in July 2012 with Jose Luis' stay at ASTRIL.

 

PUBLICATIONS

  • J. L. Sanchez-Lopez, J. Pestana, S. Saripalli, P. Campoy. An Approach Towards Visual Autonomous ship board landing of a VTOL UAV. Journal of Intelligent and Robotic Systems. April 2014. Volume 74, Issue 1-2, pp. 113-127. Springer Netherlands. Print ISSN: 0921-0296. Online ISSN: 1573-0409. IF=0.827, Q3.
  • J. L. Sanchez-Lopez, S. Saripalli, P. Campoy, J. Pestana, C. Fu. Visual Autonomous Ship Board Landing of a VTOL UAV. 2013 International Conference on Unmanned Aircraft Systems (ICUAS’13). Atlanta, Georgia (USA). May 28-31, 2013.
  • J. L. Sanchez-Lopez, S. Saripalli, P. Campoy. Autonomous ship board landing of a VTOL UAV. AHS 69th Annual Forum 2013. Phoenix, Arizona (USA). May 21-23, 2013.

1/10th scale waypoint following

Waypoint following using Intel’s Euclid and LIDAR on a 1/10th scale RC car.

High speed waypoint following

Collaborative Localization

The main aim of this project is to create a framework that can perform collaborative localization between groups of micro aerial vehicles (multirotor vehicles) using monocular cameras as the only sensors.

Especially in the context of UAV swarms, which are rapidly becoming a popular idea in robotics, the focus is usually on small platforms with limited sensory payload and computational capacity. Having each vehicle in such a group run its own localization algorithm, such as SLAM, can be computationally intensive. And although monocular cameras are ubiquitous on current MAV platforms, they cannot resolve scale on their own, which necessitates further sensor fusion. Given these challenges, collaboration is desirable because it has the potential to reduce computation as well as improve localization accuracy: relative measurements can be fused with individual measurements to reduce estimation error across the group. Collaboration also allows all vehicles to be localized in one frame of reference, which is advantageous for applications such as formation control.

In this algorithm, we first detect features from all cameras and extract common features through matching. The matches are then propagated through an adaptive RANSAC technique [ref], which results in the creation of a map. Once created, this map is accessible from all vehicles, allowing each vehicle to perform its own localization through feature tracking and 3D-2D correspondences. We assume that the metric distance and heading between at least two vehicles are known before the algorithm is initialized, in order to have an estimate of the scale. As the vehicles move around, if the number of tracked features falls below a certain threshold, the vehicles 'collaborate' once more to match common features and update the global map; continuous communication between vehicles is therefore not necessary. The frequency of map updates depends on factors such as how fast the UAVs move and how quickly the environment changes. We are also looking into techniques such as covariance intersection, so that two vehicles can use relative measurements between each other when needed without performing a full map update.
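As an illustration of the per-vehicle localization step (3D-2D correspondences against the shared map), the Python sketch below uses standard OpenCV primitives. The names map_points_3d and map_descriptors stand in for the shared map, the intrinsics K are assumed to come from calibration, and OpenCV's stock solvePnPRansac replaces the adaptive RANSAC of the actual algorithm.

    import numpy as np
    import cv2

    def localize_against_map(frame_gray, map_points_3d, map_descriptors, K):
        # Estimate one vehicle's pose from 3D-2D matches to the shared map.
        # map_points_3d: (N,3) float32; map_descriptors: (N,32) uint8 ORB.
        orb = cv2.ORB_create(1000)
        kps, desc = orb.detectAndCompute(frame_gray, None)
        if desc is None:
            return None
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(map_descriptors, desc)
        if len(matches) < 6:
            return None                 # too few matches: trigger a map update
        pts3d = np.float32([map_points_3d[m.queryIdx] for m in matches])
        pts2d = np.float32([kps[m.trainIdx].pt for m in matches])
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts3d, pts2d, K, None, reprojectionError=3.0)
        return (rvec, tvec, len(inliers)) if ok else None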

Recently, we have tested this algorithm using Microsoft AirSim, a UAV simulator that we modified in order to simulate multiple vehicles. AirSim uses Unreal Engine as its base, a high-fidelity video game engine with features such as high-resolution textures, realistic shadows, post-processing and visual effects. A sample video of localization performed on AirSim images can be seen below.

Past Projects

Large scale 3D mapping using LIDAR
In this project, we used the Iterative Closest Point (ICP) algorithm to build a mapping pipeline that constructs 3D maps from LIDAR data. The pipeline has been tested on data from Velodyne and RIEGL LIDARs. The Velodyne LIDAR (VLP-16) was mounted on a car that was driven around; GPS was not used. The final trajectories and maps were constructed from the LIDAR data alone.
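A minimal version of such an ICP odometry loop, assuming the Open3D library and a list of per-scan point clouds, might look like the Python sketch below; the voxel size and correspondence distance are hypothetical tuning values, and a real pipeline would add loop closure and map maintenance.

    import copy
    import numpy as np
    import open3d as o3d

    def build_map(scans, voxel=0.2, max_corr_dist=1.0):
        # Chain point-to-point ICP between consecutive LIDAR scans and
        # accumulate the aligned scans into one map (no GPS involved).
        pose = np.eye(4)                       # world-from-scan pose
        merged = o3d.geometry.PointCloud()
        prev = scans[0].voxel_down_sample(voxel)
        merged += prev
        for scan in scans[1:]:
            cur = scan.voxel_down_sample(voxel)
            # Register the new scan against the previous one.
            reg = o3d.pipelines.registration.registration_icp(
                cur, prev, max_corr_dist, np.eye(4),
                o3d.pipelines.registration.TransformationEstimationPointToPoint())
            pose = pose @ reg.transformation   # compose incremental motion
            merged += copy.deepcopy(cur).transform(pose)
            prev = cur
        return merged.voxel_down_sample(voxel)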

 

Vision based GPS-denied Object Tracking and Following for UAV
In this project, we present a vision-based control strategy for tracking and following objects using an Unmanned Aerial Vehicle. We developed an image-based visual servoing method that uses only a forward-looking camera to continuously track and follow user-specified objects from a multirotor UAV, maintaining a fixed distance from the object while keeping it centered in the image plane, without any dependence on GPS. The algorithm was validated using a Parrot AR Drone 2.0 in outdoor conditions while tracking and following people and other static or fast-moving objects, demonstrating the robustness of the proposed system against perturbations, illumination changes and occlusions. Please visit the project page for more details.
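The essence of such an image-based visual servoing loop can be illustrated as proportional control on bounding-box features: the horizontal offset of the tracked object drives yaw, the vertical offset drives climb, and the apparent size drives forward speed to hold a fixed distance. The Python sketch below is a simplified stand-in with hypothetical gains, not the exact controller used in the project.

    def ibvs_command(bbox, img_w=640, img_h=360, ref_area_frac=0.05):
        # Map a tracked bounding box (x, y, w, h in pixels) to normalized
        # velocity commands for a multirotor (hypothetical proportional gains).
        x, y, w, h = bbox
        cx, cy = x + w / 2.0, y + h / 2.0
        err_x = (cx - img_w / 2.0) / (img_w / 2.0)     # -1 .. 1, horizontal
        err_y = (cy - img_h / 2.0) / (img_h / 2.0)     # -1 .. 1, vertical
        err_size = ref_area_frac - (w * h) / float(img_w * img_h)
        return {
            "yaw_rate":  0.8 * err_x,     # rotate to re-center the object
            "climb":    -0.6 * err_y,     # object low in image -> descend
            "forward":  10.0 * err_size,  # object too small -> move closer
        }

    print(ibvs_command((400, 120, 60, 90)))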

 

Ars Robotica
Ars Robotica is a collaboration between the Unmanned Systems Lab and the School of Film, Dance and Theatre at Arizona State University. Using the Rethink Robotics Baxter as a test platform, Ars Robotica investigates whether a human quality of movement can be defined and achieved by robots, validated through the idea of viewing a robot as a performer in theater. Training data is obtained through various modes of sensing, ranging from simple devices such as the Microsoft Kinect to high-speed, precise tracking setups such as a 12-camera OptiTrack system. This data is used to define a vocabulary of human motion primitives, helping create a framework for autonomous interpretation and expression of human-like motion through Baxter. Please visit the project page for more details.


Terrain Mapping using UAVs
Three-dimensional mapping is an extremely important aspect of geological surveying. Current methods, however, often pose practical challenges. We introduce a technique for performing terrain mapping using unmanned aerial vehicles (UAVs) and standard digital cameras. Using a photogrammetric process called structure from motion (SfM), aerial images can be used to infer three-dimensional data. Please visit the project page for more details.
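The core two-view step of structure from motion can be sketched with OpenCV: estimate the essential matrix from matched pixels, recover the relative camera pose, and triangulate 3D terrain points. The matched point arrays and intrinsics below are assumed inputs (e.g. from a feature matcher and camera calibration), so this is an illustration rather than the full pipeline.

    import numpy as np
    import cv2

    def two_view_points(pts1, pts2, K):
        # Triangulate 3D points from two aerial images, given matched
        # pixel coordinates pts1/pts2 (Nx2 float64) and intrinsics K (3x3).
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera
        P2 = K @ np.hstack([R, t])                          # second camera
        pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
        return (pts4d[:3] / pts4d[3]).T    # homogeneous -> Euclidean, Nx3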

 


Autonomous Kite Plane for Aerial Surveillance

The development of an autonomous fixed-wing motorized kite plane, Autokite, offers a unique approach to aerial photography in the field. The inexpensive and lightweight nature of the Autokite makes it ideal for deployment in environments that are remote and/or extreme.

 
 
 


Autonomous ship board landing of a VTOL UAV
The autonomous landing of Vertical Take-Off and Landing (VTOL) Unmanned Aerial Vehicles (UAVs) is a very important capability for autonomous systems. Autonomous landing on a ship deck platform is still an active area of study and has only recently been solved for very favorable weather conditions.
Our challenge is to provide the UAV with the capability to land autonomously on ship deck platforms in extreme weather conditions.

 


EGGS (Exploration Geology & Geophysics Sensors) 
The EGGS Project, Exploration Geology & Geophysics Sensors, aims to develop a diverse set of robust, self-righting, multi-purpose data-collection platforms capable of assisting scientists and explorers in the field on Earth, or through remote deployments to nearby asteroids. With an integrated camera, microscope, accelerometer and magnetometer, and configurations for adding other instruments, EGGS are a low-cost, 3D-printable option for students, researchers and enthusiasts who want to learn more about an environment remotely.


Using UAVs to Assess Signal Strength Patterns for Radio Telescopes
This work considers the design of flight hardware for measuring the signal-strength field pattern of an array of radio telescopes. The ultra-stable and robust aerial platform offered by a multirotor craft makes this task possible.

 
 
 


Change Detection using airborne Lidar
In the course of this project, we worked with geologists on developing algorithms for finding local displacements of the topography during earthquakes. The algorithms use Digital Elevation Models of earthquake sites (before and after the earthquake) obtained from Lidar scanners mounted on aerial vehicles. Please visit the project page for more details.
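At its simplest, the vertical component of such change detection is a differencing of the pre- and post-event DEMs; horizontal offsets can then be estimated by locally cross-correlating the two surfaces. The Python sketch below shows the vertical differencing plus a single-patch correlation offset, assuming the DEMs are already co-registered numpy grids at the same resolution (array names and window size are hypothetical).

    import numpy as np

    def vertical_change(dem_before, dem_after):
        # Per-cell elevation change between co-registered DEM grids.
        return dem_after - dem_before

    def patch_offset(dem_before, dem_after, r0, c0, win=64):
        # Estimate the horizontal shift (rows, cols) of one terrain patch
        # via FFT cross-correlation of detrended elevation windows.
        a = dem_before[r0:r0 + win, c0:c0 + win]
        b = dem_after[r0:r0 + win, c0:c0 + win]
        a, b = a - a.mean(), b - b.mean()
        corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap indices above win/2 to negative shifts.
        return tuple(p - win if p > win // 2 else p for p in peak)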

 


NIR Camera
The objective of the NIR project was to construct an equivalent of the MER PANCAM from readily available commercial parts, for use in science and in the study of Earth's atmosphere and geological features. Please visit the project page for more details.

 
 
 


Path Planning for Ground Vehicles
The objective of this project was to study and devise new means of motion planning for ground vehicles, using a rover named Raven as the prototype vehicle. More specifically, we try to determine smooth paths for Raven to follow as it traverses waypoints; such paths have wide use in applications, for instance in following an astronaut as he or she walks along an arbitrary path. Please visit the Project Page for more details.
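One standard way to obtain such smooth waypoint-following paths is to fit a cubic spline parameterized by cumulative chord length, which gives a continuously curving path through the waypoints. The sketch below assumes SciPy; the waypoints are hypothetical.

    import numpy as np
    from scipy.interpolate import CubicSpline

    def smooth_path(waypoints, samples=200):
        # Fit a chord-length-parameterized cubic spline through 2D
        # waypoints and return densely sampled path points.
        wp = np.asarray(waypoints, dtype=float)
        d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(wp, axis=0), axis=1))]
        spline = CubicSpline(d, wp)           # vector-valued spline
        s = np.linspace(0.0, d[-1], samples)
        return spline(s)                      # (samples, 2) array

    path = smooth_path([(0, 0), (2, 1), (4, 0), (6, 2)])
    print(path[:3])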

 
 


Plume Detection
The objective of this work was to autonomously detect manually verified features (plumes) in images under onboard conditions. Success enables these methods to be applied to future outer solar system missions and facilitates onboard autonomous detection of transient events and features regardless of viewing and illumination effects, electronic interference, and physical image artifacts. Autonomous detection maximizes the use of the spacecraft's memory capacity and downlink bandwidth by prioritizing data of the utmost scientific significance. Please visit the project page for more details.

 

R.A.V.E.N.
RAVEN (Robotic Assist Vehicle for Extraterrestrial Navigation) was designed for the 2010 Revolutionary Aerospace Systems Concepts Academic Linkage (RASC-AL) contest.  Please visit the project page for more details.

 
 


Road Detection from UAV Aerial Imagery

This work uses aerial images taken by UAVs to detect the presence of roads. We developed variations of algorithms suitable for different types of roads and detection tasks. Please visit the project page for more details.

 
 
 
 


Autonomous Sampling
Autonomous Underwater Vehicles have proven themselves to be indispensable tools for mapping and sampling aquatic environments. However, these sensing platforms can only travel as far as their stored energy capacities allow. We have therefore researched both offline and online adaptive sampling strategies that optimize both the estimation accuracy of the models derived from sampling and the energy consumption of the vehicle. Please visit the project page for more details.
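A toy version of the online strategy can be written as a greedy rule: at each step, pick the next sampling location that maximizes the predicted model uncertainty minus a travel-energy penalty. The sketch below scores candidates with a Gaussian-process-style posterior variance under an RBF kernel; the kernel length scale and energy weight are hypothetical, and a real planner would optimize whole paths rather than single points.

    import numpy as np

    def rbf(a, b, ls=2.0):
        # RBF kernel matrix between point sets a (N,2) and b (M,2).
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ls**2)

    def next_sample(sampled, candidates, current_pos, energy_weight=0.05):
        # Greedy adaptive sampling: maximize GP posterior variance at a
        # candidate minus the energy cost of travelling there.
        K = rbf(sampled, sampled) + 1e-6 * np.eye(len(sampled))
        Ks = rbf(candidates, sampled)
        # Posterior variance of a unit-variance GP at each candidate.
        var = 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)
        cost = np.linalg.norm(candidates - current_pos, axis=1)
        return candidates[np.argmax(var - energy_weight * cost)]

    sampled = np.array([[0.0, 0.0], [1.0, 0.0]])
    grid = np.stack(np.meshgrid(np.arange(10), np.arange(10)), -1)
    grid = grid.reshape(-1, 2).astype(float)
    print(next_sample(sampled, grid, current_pos=np.array([1.0, 0.0])))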

© 2016–2021 Unmanned Systems Lab
