Abstract: The Tennessee River in the United States is one of the most ecologically distinct rivers in the world and serves as an important resource for local residents. However, it is also one of the most polluted rivers in the world, and a leading cause of this pollution is stormwater runoff. Satellite remote sensing technology, which has been used successfully to study surface water quality parameters for many years, could be very useful for studying and monitoring water quality in the Tennessee River. This study developed a numerical turbidity estimation model for the Tennessee River and its tributaries in Southeast Tennessee using Landsat 8 satellite imagery coupled with near real-time in situ measurements. The results suggest that a nonlinear regression-based numerical model built on Band 4 (red) surface reflectance values of the Landsat 8 OLI sensor can estimate turbidity in these water bodies with potentially high accuracy. The accuracy assessment of the estimated turbidity achieved a coefficient of determination (R²) of up to 0.97 and a root mean square error (RMSE) as low as 1.41 NTU. The model was also tested on imagery acquired on a different date to assess its potential for routine remote estimation of turbidity, and it produced encouraging results with an R² value of 0.94, albeit a relatively high RMSE.
Keywords: remote sensing; GIS; Tennessee River; Landsat 8 Operational Land Imager (OLI); water quality; turbidity; spaceborne sensor; environmental indicators; satellite imagery; Chattanooga
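The abstract does not give the model's functional form or coefficients, so the sketch below is only a minimal illustration of the general approach: fitting an assumed exponential relationship between Band 4 reflectance and turbidity, then scoring the fit with R² and RMSE. The functional form, variable names, and sample values are assumptions, not the paper's.

```python
# Illustrative nonlinear turbidity-vs-red-reflectance fit. The exponential
# model form and all data values below are assumptions for demonstration;
# they are not taken from the paper.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical pairs of Landsat 8 OLI Band 4 (red) surface reflectance
# and near-coincident in situ turbidity (NTU).
reflectance = np.array([0.02, 0.04, 0.06, 0.08, 0.10, 0.12, 0.15])
turbidity = np.array([2.1, 4.0, 7.5, 12.8, 21.0, 33.5, 60.2])

def model(r, a, b):
    """Assumed exponential form: turbidity = a * exp(b * reflectance)."""
    return a * np.exp(b * r)

params, _ = curve_fit(model, reflectance, turbidity, p0=(1.0, 20.0))
predicted = model(reflectance, *params)

# Accuracy metrics of the kind reported in the abstract.
rmse = np.sqrt(np.mean((turbidity - predicted) ** 2))
ss_res = np.sum((turbidity - predicted) ** 2)
ss_tot = np.sum((turbidity - turbidity.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"a = {params[0]:.3f}, b = {params[1]:.3f}")
print(f"R^2 = {r_squared:.3f}, RMSE = {rmse:.2f} NTU")
```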
Abstract: Wildfire plays an important role in ecosystem dynamics, land management, and global processes. Understanding the dynamics associated with wildfire, such as risks, spatial distribution, and effects, is important for developing a clear understanding of its ecological influences. Remote sensing technologies provide an efficient, quantitative means to study fire ecology at multiple scales. This paper provides a broad review of the applications of remote sensing techniques in fire ecology. Remote sensing applications related to fire risk mapping, fuel mapping, active fire detection, burned area estimation, burn severity assessment, and post-fire vegetation recovery monitoring are discussed. Emphasis is given to the roles of multispectral sensors, lidar, and emerging UAS technologies in mapping, analyzing, and monitoring various environmental properties related to fire activity. Examples of current and past research are provided, and future research trends are discussed. In general, remote sensing technologies provide a low-cost, multi-temporal means of conducting local, regional, and global-scale fire ecology research, and current research is rapidly evolving with the introduction of new technologies and techniques that are increasing accuracy and efficiency. Future research is anticipated to continue to build upon emerging technologies, improve current methods, and integrate novel approaches to analysis and classification.
Keywords: review; fire ecology; multispectral sensors; lidar; UAS
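As one concrete example of the burn severity assessment this review discusses, the differenced Normalized Burn Ratio (dNBR) is a widely used multispectral index contrasting near-infrared and shortwave-infrared reflectance before and after a fire. The sketch below uses placeholder reflectance values and is an illustration of the general technique, not code from the paper.

```python
# Minimal sketch of the differenced Normalized Burn Ratio (dNBR), a
# standard multispectral burn severity index. Arrays below are
# placeholder reflectance values, not real data.
import numpy as np

def nbr(nir, swir):
    """NBR = (NIR - SWIR) / (NIR + SWIR), computed per pixel."""
    return (nir - swir) / (nir + swir + 1e-10)  # epsilon avoids divide-by-zero

# Hypothetical pre-fire and post-fire NIR/SWIR reflectance grids.
pre_nir, pre_swir = np.array([[0.45, 0.50]]), np.array([[0.15, 0.18]])
post_nir, post_swir = np.array([[0.20, 0.48]]), np.array([[0.30, 0.19]])

# dNBR = pre-fire NBR - post-fire NBR; higher values indicate more
# severe burning (first pixel burned, second essentially unchanged).
dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
print(dnbr)
```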
Future U.S. Workforce for Geospatial Intelligence assesses the supply of expertise in 10 geospatial intelligence (GEOINT) fields, including 5 traditional areas (geodesy and geophysics, photogrammetry, remote sensing, cartographic science, and geographic information systems and geospatial analysis) and 5 emerging areas that could improve geospatial intelligence (GEOINT fusion, crowdsourcing, human geography, visual analytics, and forecasting). The report also identifies gaps in expertise relative to NGA's needs and suggests ways to ensure an adequate supply of geospatial intelligence expertise over the next 20 years.
In current usage, the term remote sensing generally refers to the use of satellite- or aircraft-based sensor technologies to detect and classify objects on Earth. It encompasses the Earth's surface, atmosphere, and oceans, based on propagated signals (e.g., electromagnetic radiation). It may be split into "active" remote sensing (when a signal is emitted by a satellite or aircraft toward the object and its reflection is detected by the sensor) and "passive" remote sensing (when the reflection of sunlight is detected by the sensor).[1][2][3][4]
Remote sensing methods can be divided into two types: passive and active. Passive sensors gather radiation that is emitted or reflected by the object or surrounding areas; reflected sunlight is the most common source of radiation measured by passive sensors. Examples of passive remote sensors include film photography, infrared sensors, charge-coupled devices, and radiometers. Active collection, on the other hand, emits energy in order to scan objects and areas, whereupon a sensor detects and measures the radiation that is reflected or backscattered from the target. RADAR and LiDAR are examples of active remote sensing, where the time delay between emission and return is measured to establish the location, speed, and direction of an object.
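A minimal sketch of that ranging principle follows: because the emitted pulse travels to the target and back, the one-way range is the speed of light times half the measured delay. The function name and sample delay are illustrative.

```python
# Minimal sketch of the ranging principle behind active sensors such as
# radar and lidar: range is recovered from the round-trip time delay.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_delay(delay_s: float) -> float:
    """One-way distance to the target. The pulse travels out and back,
    so the round-trip time is divided by two."""
    return C * delay_s / 2.0

# A 10-microsecond echo delay corresponds to a target ~1.5 km away.
print(range_from_delay(10e-6))  # ~1499 m
```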
The basis for multispectral collection and analysis is that examined areas or objects reflect or emit radiation that stands out from surrounding areas.
In order to create sensor-based maps, most remote sensing systems must extrapolate sensor data relative to reference points, including known distances between points on the ground. The accuracy achievable depends on the type of sensor used. For example, in conventional photographs distances are accurate at the center of the image, with the distortion of measurements increasing the farther you get from the center. Another source of error is the platen against which the film is pressed, which can cause severe errors when photographs are used to measure ground distances. The step in which this problem is resolved is called georeferencing and involves computer-aided matching of points in the image (typically 30 or more points per image) to an established benchmark, "warping" the image to produce accurate spatial data. Since the early 1990s, most satellite images have been sold fully georeferenced.
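A minimal sketch of that matching step follows, assuming an affine (first-order) warp estimated from a handful of made-up ground control points by least squares. Production georeferencing uses many more points and often higher-order polynomial or spline transforms; all coordinates below are illustrative.

```python
# Minimal georeferencing sketch: estimate an affine transform from
# ground control points (GCPs) by least squares, then map any pixel
# coordinate to map coordinates. The GCPs below are made up.
import numpy as np

# (column, row) pixel coordinates and their known map coordinates (x, y).
pixels = np.array([[10, 20], [200, 30], [50, 180], [220, 210]], float)
coords = np.array([[500010.0, 4199980.0], [500200.0, 4199975.0],
                   [500048.0, 4199822.0], [500219.0, 4199791.0]])

# Solve coords ~= design @ A for the 3x2 affine coefficient matrix A,
# where each design row is [col, row, 1].
design = np.hstack([pixels, np.ones((len(pixels), 1))])
affine, *_ = np.linalg.lstsq(design, coords, rcond=None)

def pixel_to_map(col, row):
    """Apply the fitted affine transform to one pixel coordinate."""
    return np.array([col, row, 1.0]) @ affine

print(pixel_to_map(100, 100))  # map coordinates of pixel (100, 100)
```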
Object-Based Image Analysis (OBIA) is a sub-discipline of GIScience devoted to partitioning remote sensing (RS) imagery into meaningful image-objects and assessing their characteristics across spatial, spectral, and temporal scales.
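A minimal OBIA-flavored sketch follows, using SLIC superpixels from scikit-image as a stand-in for a full segmentation and then summarizing per-object statistics. The image, parameters, and choice of segmenter are illustrative assumptions, not a canonical OBIA workflow.

```python
# Sketch of the OBIA idea: partition an image into segments
# ("image-objects") and assess each object's properties. SLIC
# superpixels stand in for a full OBIA segmentation here.
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

rng = np.random.default_rng(0)
image = rng.random((60, 60, 3))  # placeholder 3-band image in [0, 1]

# Partition the image into ~25 spectrally/spatially coherent objects.
segments = slic(image, n_segments=25, compactness=10, start_label=1)

# Assess each object: size (spatial) and mean band-1 value (spectral).
for region in regionprops(segments, intensity_image=image[..., 0]):
    print(region.label, region.area, round(region.mean_intensity, 3))
```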
Old data from remote sensing is often valuable because it may provide the only long-term record for a large extent of geography. At the same time, the data is often complex to interpret and bulky to store. Modern systems tend to store the data digitally, often with lossless compression. The difficulty with this approach is that the data is fragile, the format may be archaic, and the data may be easy to falsify. One of the best systems for archiving data series is computer-generated, machine-readable ultrafiche, usually in typefaces such as OCR-B, or as digitized half-tone images. Ultrafiches survive well in standard libraries, with lifetimes of several centuries. They can be created, copied, filed, and retrieved by automated systems. They are about as compact as archival magnetic media, yet can be read by human beings with minimal, standardized equipment.
Generally speaking, remote sensing works on the principle of the inverse problem: while the object or phenomenon of interest (the state) may not be directly measured, there exists some other variable that can be detected and measured (the observation) which may be related to the object of interest through a calculation. The common analogy given to describe this is trying to determine the type of animal from its footprints. For example, while it is impossible to directly measure temperatures in the upper atmosphere, it is possible to measure the spectral emissions from a known chemical species (such as carbon dioxide) in that region. The frequency of the emissions may then be related via thermodynamics to the temperature in that region.
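That atmospheric-temperature example can be made concrete: inverting Planck's law converts a measured spectral radiance (the observation) into a brightness temperature (the state). The sketch below uses standard physical constants; the sample wavelength and radiance are illustrative.

```python
# Minimal sketch of the inverse calculation described above: recover a
# brightness temperature from measured spectral radiance by inverting
# the Planck function. Sample inputs are illustrative.
import math

H = 6.62607015e-34  # Planck constant, J*s
C = 2.99792458e8    # speed of light, m/s
K = 1.380649e-23    # Boltzmann constant, J/K

def brightness_temperature(radiance, wavelength):
    """Invert Planck's law:
    T = (h*c / (k*lam)) / ln(1 + 2*h*c^2 / (B * lam^5)),
    with radiance B in W m^-2 sr^-1 m^-1 and wavelength lam in meters."""
    return (H * C / (K * wavelength)) / math.log(
        1.0 + 2.0 * H * C**2 / (radiance * wavelength**5))

# Example: a thermal-infrared measurement at 11 micrometers yields a
# roughly Earth-like temperature (~296 K).
print(brightness_temperature(9.0e6, 11e-6))
```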
The modern discipline of remote sensing arose with the development of flight. The balloonist G. Tournachon (alias Nadar) made photographs of Paris from his balloon in 1858.[27] Messenger pigeons, kites, rockets, and unmanned balloons were also used for early images. With the exception of balloons, these first individual images were not particularly useful for map making or for scientific purposes.
The development of artificial satellites in the latter half of the 20th century allowed remote sensing to progress to a global scale by the end of the Cold War.[31] Instrumentation aboard various Earth-observing and weather satellites such as Landsat, Nimbus, and more recent missions such as RADARSAT and UARS provided global measurements of various data for civil, research, and military purposes. Space probes to other planets have also provided the opportunity to conduct remote sensing studies in extraterrestrial environments; synthetic aperture radar aboard the Magellan spacecraft provided detailed topographic maps of Venus, while instruments aboard SOHO allowed studies to be performed on the Sun and the solar wind, to name a few examples.[32][33]
Remote sensing data are processed and analyzed with computer software, known as a remote sensing application. A large number of proprietary and open source applications exist to process remote sensing data.
According to NOAA-sponsored research by Global Marketing Insights, Inc., the most-used applications among Asian academic groups involved in remote sensing are as follows: ERDAS 36% (ERDAS IMAGINE 25% and ERMapper 11%), ESRI 30%, ITT Visual Information Solutions ENVI 17%, and MapInfo 17%.
In education, those who want to go beyond simply looking at satellite image print-outs either use general remote sensing software (e.g., QGIS), Google Earth, StoryMaps, or software/web apps developed specifically for education (e.g., desktop: LeoWorks; online: BLIF).