Making the Grade with SSL

Disclaimer: These are the steps that I followed. Please do due diligence and investigate fully before you attempt to modify your own server. Your mileage may vary…

I have a number of websites that run on Windows servers running Internet Information Services (IIS). One of the requirements I pretty much insist on is that if a site allows you to log in, it has to have an encrypted means of communication. We do that by generating a certificate request inside of IIS Manager and sending the certificate signing request (CSR) off to be signed by a certificate provider like GlobalSign, DigiCert, etc. The certificate provider will sign the certificate and send you back a blob of text that you can save in a text file with a *.cer extension. You then open up IIS Manager, select the server, and complete the certificate request, which installs the certificate on the server. You can then edit the bindings for the website that you want to enable SSL on, add an HTTPS binding, and select the certificate.
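As an aside, that blob of text the provider sends back is just a base64 (PEM) encoding of the binary DER certificate. If you ever want a quick sanity check that the blob you pasted into the .cer file wasn’t truncated or mangled in transit, Python’s standard library can round-trip it. The `check_cer` helper below is hypothetical – it only verifies the PEM framing and base64 decode, not the certificate’s actual contents:

```python
import ssl

def check_cer(pem_text: str) -> int:
    """Return the DER payload size if the PEM framing decodes cleanly.

    This only confirms the blob wasn't truncated or mangled in transit;
    it does NOT validate the certificate's contents or signature.
    """
    # Raises an error if the BEGIN/END CERTIFICATE framing or base64 is broken
    der = ssl.PEM_cert_to_DER_cert(pem_text)
    return len(der)

# Demo with a made-up 32-byte payload rather than a real certificate
fake_der = bytes(range(32))
pem = ssl.DER_cert_to_PEM_cert(fake_der)
assert check_cer(pem) == 32
```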

Easy peasy…you’re done, right? Unfortunately, not quite.

There’s all kinds of security buzz these days about SSL work-arounds and tricks that reduce the security it provides – attacks with funky names like BEAST, POODLE, FREAK, etc. So we want to make sure that the ciphers and encryption techniques we use are as safe as possible. There are tools available on the web that will hammer your SSL implementation and tell you if there are any weaknesses. One such online tool is the Qualys SSL Labs test – available at:

I ran the SSL Labs scan on a Windows Server 2008 R2 box running IIS 7.5 that I’d just installed a certificate on. The results were not very good with the out-of-the-box settings – an “F” (see below)

SSL Labs Initial Scan

The report gives some feedback on what SSL Labs thinks the deficiencies are in your site’s SSL configuration, along with links to more info. In the case of this Windows 2008 R2 server, it identified:

  • SSL 2 is supported – it’s old, it’s creaky, and it’s not to be trusted
  • SSL 3 is supported – (see above) and it’s vulnerable to POODLE attack (oh noes – not poodles!)
  • TLS 1.2 isn’t supported – TLS 1.1 isn’t either, though the report doesn’t call that out; we’ll fix both
  • Some of the SSL cipher suites advertised by the server as supported are either considered weak, or they don’t support Perfect Forward Secrecy

The first three items we can fix by editing the registry; the last item requires us to modify one of the group policy settings. The standard disclaimers apply – don’t make any changes to your system unless you are a highly trained professional who understands that these changes may cause your system to no-worky, and make sure you have a full backup of the system so that you can restore it if things go sideways.

To disable SSL 2.0 & 3.0 and to enable TLS 1.1 & 1.2, I had to run Regedit.exe and go to:


You’ll probably only see one key under Protocols – SSL 2.0 – and in my case it only had a Client subkey.

SSL 2.0 Initial Values

I created a Server key under SSL 2.0 and added a DWORD value named “DisabledByDefault” with a data value of “1”. Now the server won’t negotiate SSL 2.0 connections.

Disable Server SSL 2.0 and 3.0

To disable SSL 3.0, create a similar SSL 3.0 key under Protocols, create a key called Server under it, and add a DWORD named DisabledByDefault with a data value of “1” there as well. No more SSL 3.0 served up now.

To enable TLS 1.1 and 1.2, follow similar steps: create TLS 1.1 and TLS 1.2 keys under Protocols, and create a Server key under each. This time, however, I added two DWORD values under each Server key: one named DisabledByDefault with a data value of “0” (we don’t want these protocols disabled) and a second named “Enabled” with a data value of “1” (new DWORDs default to “0”, so you’ll need to change the value to “1” after you create the entry).
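Taken together, the registry edits above can be captured in a single .reg file. The sketch below is my reconstruction of the steps described – the SCHANNEL path shown is the standard location for these protocol keys, but verify it (and test on a non-production box) before merging anything:

```reg
Windows Registry Editor Version 5.00

; Disable SSL 2.0 and SSL 3.0 on the server side
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server]
"DisabledByDefault"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server]
"DisabledByDefault"=dword:00000001

; Enable TLS 1.1 and TLS 1.2 on the server side
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001
```

Remember that these changes don’t take effect until the server is rebooted.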

Keys to enable TLS 1.1 and 1.2

I closed Regedit – there’s no “save” step, as registry edits are written as you make them.

Next we need to edit the group policy setting that determines which SSL cipher suites the server will offer. To edit the group policy on my standalone server, I clicked Start -> Run and typed “gpedit.msc” to open the Windows group policy editor snap-in. The entry we want to modify is under:

Computer Configuration -> Administrative Templates -> Network -> SSL Configuration Settings

The entry we want to modify is “SSL Cipher Suite Order”, which is “Not Configured” by default. That means the server falls back to the default Windows cipher list and ordering.

SSL Cipher Suite Order Default State

To serve up only ciphers that aren’t weak and that support Perfect Forward Secrecy, I had to choose a subset of ciphers. Luckily, Steve Gibson at GRC shared a list of ciphers that meet those criteria on his site at:

One caveat is that the list you paste into the group policy editor has to be a single line of comma-separated values – no carriage returns or the like. I copied the text from Steve’s site into Notepad and then, starting at the bottom, hit Home + Backspace on each line until I had a single line of comma-separated values.
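If Notepad gymnastics aren’t your thing, the same line-joining can be scripted in a few lines. A sketch in Python (the suite names below are just examples – use the actual list from Steve’s page):

```python
def to_single_line(text: str) -> str:
    """Collapse a multi-line cipher-suite list into one comma-separated line,
    which is the format the SSL Cipher Suite Order policy setting expects."""
    # Strip whitespace and any trailing comma from each line, drop blanks
    parts = [line.strip().rstrip(",") for line in text.splitlines()]
    return ",".join(p for p in parts if p)

multi_line = """\
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
"""
single = to_single_line(multi_line)
# `single` is now one line with no embedded line breaks, ready to paste
```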

Cipher suites in a single line

Click the “Enabled” radio button, highlight the default values in the SSL Cipher Suites textbox and delete them, paste in the new values from Notepad (remember the single-line, no-line-breaks rule), click Apply and then OK, and we’re done.

SSL Cipher Suite Order Enabled

I then closed the group policy editor MMC snap-in, rebooted the server (the changes won’t take effect until you reboot), went back to Qualys SSL Labs, clicked the “Clear cache” link, and re-ran the test. SSL Labs caches the results from the previous scan, so unless you click the link you’ll just be looking at the previous scan results.

Qualys SSL Labs A Grade

Voila! We’ve gone from an “F” grade to an “A” grade. Whether the site is actually more secure or not is beyond the scope of this blog post, but if I am being asked to serve up an SSL secure site and it gets an “F” there would be some ‘splainin’ to do.

Hopefully this helps with understanding what steps were required for me to get the “A” grade.

Where Have All The Research Vessels Gone?

That’s the question that we hope to answer with the International Research Ships Schedules & Information project. Imagine with me if you will a site where we can log metadata about the research adventures of all oceanographic research vessels. Think of the opportunities that it would open up for researchers wanting to know who has been exploring in a specific region of the ocean, what they were looking for and, if we can get the metadata exposed, what data they collected with possible links to where it can be discovered. I’m proposing that we continue to develop the pot where all of that ship information and cruise metadata can be cooked, blended together with just the right seasonings (algorithms), and develop the scoops (tools) that would help users, agencies and researchers pull out a portion that suits their needs. I like to call the concept Stone Soup Science. I love the Stone Soup story and think the concept that it conveys in this data context is a perfect fit. (It’s much better than my other depictions – “Show Me The Data!” or <cue AC/DC music>”Dirty Data…Done Dirt Cheap”  ;?)

Stone Soup Science

No, I am not proposing that we build a data warehousing site to hold all of the oceanographic data that’s being collected. There are agencies and organizations all over the world that are already doing a great job of that. What I am proposing is that we continue the modernization of the Research Vessels site to help users mine for and discover where RVs have operated in the past. The next step would be to expose links to those data warehouses. I certainly wouldn’t want to have to comb through the holdings of NOAA, R2R, IFREMER, CSIRO, etc. to find out who has been doing what oceanographic research, where they went, and when they were there. I think this is a better one-stop RV shop solution. All research vessels would be added to the site – not just vessels over a certain size, not just vessels that belong to a certain agency, not just vessels that specialize in one facet of science. We can most certainly create customized views, but they’d be built as queries against the vessel database that return just those vessels that are of interest to the user or association.
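To make the “customized views” idea concrete, here’s a toy sketch of what such a query layer might look like. The field names and vessel records are entirely hypothetical – the real site would run these as queries against its vessel database:

```python
# Hypothetical vessel records; the real data would come from the site's database.
VESSELS = [
    {"name": "Vessel A", "country": "US", "length_m": 52, "focus": "fisheries"},
    {"name": "Vessel B", "country": "AU", "length_m": 94, "focus": "geophysics"},
    {"name": "Vessel C", "country": "US", "length_m": 83, "focus": "geophysics"},
]

def custom_view(vessels, **criteria):
    """Return only the vessels matching every supplied field=value criterion."""
    return [v for v in vessels
            if all(v.get(k) == want for k, want in criteria.items())]

# A user or association would get just the slice they care about:
us_geophysics = custom_view(VESSELS, country="US", focus="geophysics")
# -> just "Vessel C"
```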

A student’s Ocean Bytes article shows one of the benefits of being able to leverage and repurpose underway ship data. Eric Geiger pulled together the underway surface mapping data from four regional research vessels to create his satellite salinity modelling algorithm as part of his thesis research. It took Eric a significant amount of time to figure out which research vessels had been working in the region he wanted to investigate, and even more time to get access to the data that the ships had collected. Imagine if we were able to put together a set of online tools that facilitated that type of investigation. That’s part of what we hope to accomplish with the International Research Vessels Schedules & Information site.

Rather than regurgitate the information on the history and future modernization thoughts on International RV project, I’ll refer you to the About Ships page, which does a pretty good job of explaining things.

International RV Tracks 2002-2011

The visualization above is a plot that I made of a subset of research vessel cruise tracks from 2002-2011. In blue are the vessels designated as US RVs and in red are the non-US vessels. I think it’s quite intriguing to be able to visualize the locations where we are conducting research, where we are transiting to the next research location (sensors still collecting underway data) and, sometimes more telling, the gaps in coverage where nobody seems to be going. Many thanks to for the data dump.

RV Hugh R Sharp Ship Track

A couple of years back, we helped the RV Hugh R Sharp set up a ship-to-shore data transfer mechanism using their newly acquired fleet broadband. Every hour or so, the transfer scripts zip up the ship’s underway data files and transfer the archive ashore. It dawned on us that we could peek inside the data archive sent ashore, parse out a subset of the ship’s underway data in 10-minute intervals, and display the ship track and its underway data in near real time. The RV Sharp Ship Tracker site was born out of this effort (see screenshot above). We’d like to prototype a more user-customizable open source version of this code and allow others to use it for their own ships. This data feed could then be pulled into the Research Vessels site to show a higher resolution ship track for the RVs that participate, as well as exposing the ship’s underway data for possible re-use by other students and researchers. For those institutions that already have a ship tracking application in place, we could develop services that would allow for harvesting and repurposing those data as well. The thought would then be to expose all of the ship information, schedule, and track metadata via a web API that would allow others to use and repurpose the data – whether on their own sites, displaying just a subset of the ships that they are interested in, or in mobile apps. Open data!
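The actual transfer scripts aren’t published here, but the 10-minute decimation idea is simple to sketch. Assuming timestamped underway records (the `sog_kts` field below is a made-up example), keep one record per 10-minute window:

```python
from datetime import datetime, timedelta

def decimate_10min(samples):
    """Keep the first record seen in each 10-minute window.

    `samples` is an iterable of (timestamp, record) pairs in time order.
    """
    seen, kept = set(), []
    for ts, record in samples:
        # Bucket key: everything down to the 10-minute block of the hour
        bucket = (ts.year, ts.month, ts.day, ts.hour, ts.minute // 10)
        if bucket not in seen:
            seen.add(bucket)
            kept.append((ts, record))
    return kept

# One fake reading per minute for half an hour -> three 10-minute track points
start = datetime(2011, 7, 1, 12, 0)
readings = [(start + timedelta(minutes=m), {"sog_kts": 10 + m}) for m in range(30)]
track = decimate_10min(readings)
# len(track) == 3
```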

No Funding

I’ve been involved in the development and operation of the International Research Vessels Schedules & Information project since around 1998. The project was previously funded by a collection of sponsors including NSF, NOAA, ONR, NAVO and the USCG, with each of them contributing funds towards the operation of the project until around 2005. Budget cuts at the time, plus the reluctance of some agencies to share upcoming ship schedules post-9/11, resulted in our program losing its funding. I put the project into hot standby until funding could be obtained to resume project development and metadata collection. The site stayed online and new ship information was added as it was received, but no major reworks of the site underpinnings happened. I’ve done the dog-and-pony show showcasing the site and its potential to a few groups since then, attempting to get funding to move forward, and while nearly everybody seemed to agree that the project should be funded, no funding ever came.

Web technologies are advancing at breakneck speeds and it’s time to move this project forward. Funding or no funding. (It’s either that or start working on a Flappy Ships app, make millions, but not contribute towards science ;?)

I’m always open to more help and ideas to maximize the project’s capabilities and potential, so if you’d like to lend a hand, do some research, contribute some code, or offer up some other resources (funding, software, training), please let me know by emailing me. The project needs re-architecting, the data tables need normalizing/denormalizing, the web design needs to be majorly spruced up, new GIS/mapping strategies and tools need to be figured out, ship data needs to be refreshed, web APIs need to be written, etc. Lots to do!

Help me help science!

Doug White

POV Sport25 Video Glasses for Fun & Research

How many times have you missed an opportunity to get some neat video to share because you didn’t have an extra hand to hold a video camera? Yeah, lots of times, me too! Well, no more!

The other day I ran across a special on for a set of POV Sport25 video glasses and I figured what the heck and pulled the trigger. They arrived this week and I took them for a test drive (literally).

POV Sport25 video glasses

They are a lot lighter than I expected and my mind is racing with all the video opportunities these bad boys will open up for us here at the college. The camera is right in between the lenses (see the tiny pinhole above). Imagine being able to film the first-person view of a grad student tagging a shark, or the deployment and recovery of underwater robots, or even <insert your scenario here>! They came with a glasses case, cleaner, USB cable and – for those doing work indoors – a set of pop-in clear lenses so that you’re not walking around in too dark of an environment.

To give you an idea of what the video quality was, I charged them up and wore them to the parking lot and on my drive downtown. The uploaded video follows. Enjoy!

Atlantic sturgeon arriving earlier in the mid-Atlantic

The unusually warm conditions in the winter and spring of 2012 have resulted in water temperatures up to 3°C warmer than in the previous 3 years, and comparable Atlantic sturgeon catches off the coast of Delaware are occurring 3 weeks earlier than in past sampling efforts. During sampling events for Atlantic sturgeon we have also documented sand tiger sharks arriving off the coast of Delaware in late March, a full month earlier than documented in previous seasons.

My research, conducted jointly with Dewayne Fox at Delaware State University and Matt Oliver at the University of Delaware, is focused on coastal movements and habitat use of adult Atlantic sturgeon during the marine phase of their life history. By utilizing acoustic biotelemetry on both traditional fixed array platforms and developing mobile array platforms, coupled with the Mid-Atlantic Regional Association for Coastal Ocean Observing Systems (MARACOOS), I am going to model Atlantic sturgeon distributions in a dynamic coastal marine environment. This research is particularly relevant given the recent protection of Atlantic sturgeon under the Endangered Species Act. Determining the factors influencing Atlantic sturgeon movements and distributions during their marine migrations will enable dynamic management strategies to reduce mortalities as well as impacts to commercial fisheries, dredging efforts, and vessel traffic. In addition to allowing for dynamic management strategies, the development of models for adult Atlantic sturgeon movements and distributions in relation to dynamic environmental conditions will illustrate how changing environmental conditions are going to impact this endangered species moving forward.

Graduate student Matt Breece with a recently telemetered female Atlantic sturgeon off the coast of Delaware

Utilizing New Tagging Technology to Characterize Sand Tiger Shark Habitat

Sand tiger sharks are large bottom-dwelling sharks found in the coastal waters of the Western North Atlantic, and are known to frequent the Delaware Bay in the summer months. Sand tiger shark populations are currently in danger of overexploitation because they are slow growing and have extremely low birth rates. While we know that the sharks are found within the Delaware Bay during summer months, little is known about their movements during the rest of the year, or what oceanographic conditions limit their spatial extent. There is evidence that these sharks make large coastal migratory movements along the Eastern Seaboard. This makes habitat characterization difficult because the sharks travel throughout such a large area. It is important for managers to know the areas of intensive use by the sharks, and the species assemblages within those areas, in order to protect these apex predators.

Dr. Matthew Oliver and Danielle Haulsee with a sand tiger shark caught in the Delaware Bay.

Our project, a collaboration among Delaware State University’s Dewayne Fox and the University of Delaware’s Matthew Oliver and Danielle Haulsee, will document and characterize the movements of sand tiger sharks, their habitat preferences, and the community assemblages they encounter using new and innovative electronic tagging technology. Sand tiger shark movements will be recorded using passive telemetry in addition to pop-off satellite archival tags. We will also deploy a new type of tagging technology, which acts as a mobile receiver, and will record any encounters with other sharks, fish or other marine animals that have been tagged with acoustic tags. We will then use satellite and remotely sensed data resources from the Mid-Atlantic Regional Association for Coastal Ocean Observing Systems (MARACOOS) to characterize and model the habitats and oceanographic conditions used by sand tiger sharks. This study will give managers a better understanding of the spatiotemporal patterns in sand tiger shark movements along the East Coast, as well as inform management decisions regarding sand tiger shark habitat utilization.

GIS Consultant Takes a Trip to Lewes

I took a fun and enlightening trip to the UD Lewes campus on Thursday. As UD’s Lead Geospatial Information Consultant, I consult on Geographic Information Systems (GIS) for research and general use. I am gearing up my knowledge of wind energy and the atmosphere by taking the “Wind Power Meteorology” course taught by Dr. Cristina Archer of UD’s College of Earth, Ocean, and Environment. I have an interest in this topic and I’m getting more consulting requests involving data from this field.

As part of that class, we took a field trip down to UD’s Lewes campus.  It was a sunny, windy day … perfect for checking out the turbine!  I made the following video with video and photos I collected with my phone, to share some of that experience.

Predicting Sea Surface Salinity from Space

The simplest definition of salinity is how salty the ocean is. Easy enough, right? Why is this basic property of the ocean so important to oceanographers? Well, along with the temperature of the water, the salinity determines how dense it is. The density of the water factors into how it circulates and mixes…or doesn’t mix. Mixing distributes nutrients allowing phytoplankton (and the rest of the food web) to thrive. Globally, salinity affects ocean circulation and can help us understand the planet’s water cycle. Global ocean circulation distributes heat around the planet which affects the climate. Climate change is important to oceanographers; therefore, salinity is important to oceanographers.

Spring Salinity Climatology for the Chesapeake

Salinity doesn’t vary that much in the open ocean, but it has a wide range in the coastal ocean. The coast is where fresh water from rivers and salt water in the ocean mix. Measurements of salinity along the coast help us understand the complex mixing between fresh and salty water and how this affects the local biology, physics, and chemistry of the seawater. However, the scope of our measurements is very small. Salinity data is collected by instruments on ships, moorings, and, more recently, underwater vehicles such as gliders. While these measurements are trusted to be very accurate, their spatial and temporal resolution leaves much to be desired when compared to, say, daily sea surface temperature estimated from a satellite in space.

So, why can’t we just measure salinity from a satellite? Well, it’s not as simple, but it is possible. NASA’s Aquarius mission, which was launched this past August, takes advantage of a set of three advanced radiometers that are sensitive to salinity (1.413 GHz; L-band) and a scatterometer that corrects for the ocean’s surface roughness. With this they plan on measuring global salinity with a relative accuracy of 0.2 psu and a resolution of 150 km. This will provide a tremendous amount of insight on global ocean circulation, the water cycle, and climate change. This is great news for understanding global salinity changes. But what about coastal salinity? What if I wanted to know the salinity in the Chesapeake Bay? That’s much smaller than 150 km.

That’s where my project comes in. It involves NASA’s MODIS-Aqua satellite (conveniently already in orbit), ocean color, and a basic understanding of the hydrography of the coastal Mid-Atlantic Ocean. Here’s how it works: we already know a few things about the color of the ocean, that is, the sunlight reflecting back from the ocean as measured by the MODIS-Aqua satellite. We know enough that we can estimate the concentration of the photosynthetic pigment chlorophyll-a. So not only can we see temperature from space, but we can estimate chlorophyll-a concentrations too! However, there are other things in the water besides phytoplankton that absorb light and alter the colors we measure from a satellite.

Spring Salinity Climatology for the Mid-Atlantic

We group these other things into a category called colored dissolved organic material or CDOM. CDOM is non-living detritus in the water that either washes off from land or is generated biologically. It absorbs light in the ultraviolet and blue wavelengths, so it’s detectable from satellites. In coastal areas especially, its main source of production is runoff from land. So, CDOM originates from land and we can see a signal of it from satellites that measure color. What’s that have to do with salinity?

You may have already guessed it, but water from land is fresh. So, water in the coastal ocean that is high in CDOM should be fresher than surrounding low-CDOM water. Now we have a basic understanding of the hydrography of the coastal Mid-Atlantic Ocean, how it relates to ocean color, and why we need the MODIS-Aqua satellite to measure it.

So, I compiled a lot of salinity data from ships (over 2 million data points) in the Mid-Atlantic coastal region (Chesapeake, Delaware, and Hudson estuaries) and matched it in space and time with data from the MODIS-Aqua satellite. Now I have a dataset that contains both ocean color and salinity. Using a non-linear fitting technique, I produced an algorithm that can predict what the salinity of the water should be given a certain spectral reflectance. I made a few of these algorithms for the Mid-Atlantic, one specifically for the Chesapeake Bay. It has an error of ±1.72 psu and a resolution of 1 km. That isn’t too bad considering the salinity in the Chesapeake ranges from 0-35 psu, but of course there’s always room for improvement. Even so, this is an important first step for coastal remote sensing of salinity. An algorithm like this can be used to estimate salinity on the same time and space scales as sea surface temperature. That’s pretty useful. The folks over at the NOAA CoastWatch east coast node thought so too. They took my model for the Chesapeake Bay and are now producing experimental near-real-time salinity images for the area. The images can be found here: They will test the algorithm to see if it is something they want to use.
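The actual reflectance bands and model from the thesis aren’t reproduced here, but the flavor of non-linear fitting can be illustrated with a toy example. Everything below is synthetic – a made-up power-law model S = a·R^b fit by brute-force least squares (a real fit would use a proper optimizer and real MODIS reflectances):

```python
def fit_power(refl, sal):
    """Crude non-linear least squares: grid-search a, b minimizing
    sum((S - a * R**b)**2) over a coarse parameter grid."""
    best = (float("inf"), None, None)
    for ai in range(1, 101):          # a in 0.5 .. 50.0
        a = ai * 0.5
        for bi in range(-30, 31):     # b in -3.0 .. 3.0
            b = bi * 0.1
            sse = sum((s - a * r ** b) ** 2 for r, s in zip(refl, sal))
            if sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

# Synthetic "reflectance" vs "salinity" generated from S = 20 * R**0.5
refl = [0.1, 0.2, 0.4, 0.8, 1.0]
sal = [20.0 * r ** 0.5 for r in refl]
a, b = fit_power(refl, sal)
# a ≈ 20, b ≈ 0.5 – the fit recovers the parameters we generated from
```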

Climatologies of salinity for all of my models can be downloaded here:

I view this project as overall support of the NASA Aquarius mission, providing high resolution coastal salinity estimates that are rooted in in situ observations. I hope this information proves to be useful for coastal ocean modeling and for understanding the complex processes that affect the important resource that is our coasts.

Demobilization and Remobilization of the Hugh R Sharp

Summer is an especially busy time for research vessels. The UNOLS fleet is making increasing use of containerized portable lab vans to shave some time and effort off of offloading the science party from one cruise and loading up the next mission and their gear. The vans also increase the flexibility of the research vessels by giving operators the option to offer additional science capabilities and facilities to vessel users. Options include adding:

  • Dry Labs
  • Wet Labs
  • Isotope Labs
  • Clean Labs
  • Cold Labs
  • Additional Berthing

This is a time lapse that we shot of the RV Hugh R Sharp returning from a multi-week scallop survey, unloading one lab van and then loading two more fresh ones before fueling up (both diesel and food) and departing on the next mission. Enjoy!

It’s all about the E-Lec-Tricity

We had a gentleman named Matthew Vest come to the GVis lab the other day to show off his do-it-yourself creation. He was looking for information on whether he might be able to showcase it at the upcoming Coast Day 2011 event that happens each year on the second Sunday of October here at the Hugh R Sharp campus in Lewes.

Matthew has done something that most of us dream about doing, something many of us say we’re going to do, and that same something that most of us never get off our duffs and actually do. He has taken a 1985 Chevy S-10 truck, removed the gasoline engine and tank, and replaced them with an electric drive motor (from a forklift, he says) and a bed full of 6 volt lead-acid batteries (aka ‘golf cart batteries’ – 24 in total). The conversion took him about 2 years to complete and cost approximately $10,000, but now he is the proud owner (and creator) of an all-electric vehicle that will go approximately 40-60 miles on a charge.

Matthew Vest’s Electric Truck

Matthew went out of his way to select “Deka” batteries to power his creation, which he says are 100% recyclable. Each of the 24 batteries weighs in at 60 lbs, for a total battery weight of about 1,440 lbs. These batteries are wired in series to generate the 144 volts DC that powers the Warp-9 electric motor that replaces the gas engine. There is also one 12 volt battery, which is used to power the stock lights, wipers and horn.
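The pack numbers check out with some quick arithmetic on the per-battery figures quoted above:

```python
batteries = 24
volts_each = 6      # lead-acid "golf cart" batteries
pounds_each = 60

pack_voltage = batteries * volts_each   # series wiring sums the voltages
pack_weight = batteries * pounds_each

print(pack_voltage, "V,", pack_weight, "lbs")  # 144 V, 1440 lbs
```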

Chevy S10 – Batteries in the Back

A Curtis 1231c controller is like the brains of the truck, controlling the power flow. A Zivan ‘smart charger’, which runs on standard 110v, sits behind the driver seat. When fully discharged, the batteries take about 10-12 hours to recharge. The only sound that the truck makes when it is running is the sound of the add-on vacuum pump that is also under the hood. It creates the vacuum that assists with the stock braking system of the truck.

Chevy S10 – Under the Hood

Matthew is hoping to touch base with some of the researchers at UD who are involved in the V2G or “Vehicle To Grid” project so that he can assess whether his S10 can be integrated with the power grid. For more information on V2G and GIEVs (Grid Integrated Electric Vehicles), see the Q&A section on the V2G site.

We asked Matt how he got started with the project and he said it just took some research online, a couple of “how to retrofit your gas vehicle into an electric vehicle” books, and some very helpful people on a few of the EV forums. We salute Matt for what turned out to be an excellent EV refit and for his consideration of the environment when he selected the batteries and materials for his electric vehicle project. Well done!


Hurricane Katia Footprints

The ORB Lab was having a meeting in the GVis Lab this week and, as usual, the East Coast US 8-Day Averaged Sea Surface Temperature overlay was up on the screens. Dr. Oliver pointed to the screen and noted that there was a path cutting across the Gulf Stream that was cooler than usual and that it was probably due to upwelling and mixing from hurricane Katia. Sure enough, we loaded up a layer showing Katia’s track and they lined up.

Katia SST Trail

We then checked to see if there was anything noticeable on the East Coast US 8-Day Average Chlorophyll layer and you can see what appears to be a slight bloom in chlorophyll along the track as well (slightly lighter blue).

Katia Chlorophyll Trail

Another neat view is the markedly cooler water that you can see flowing into the bays from the increased river discharge that resulted from the large amounts of rain dropped by Hurricane Katia and Tropical Storm Lee as they passed through.

Cold river water 20110913

These layers and several others are processed and uploaded daily and made available via the ORB Lab website in the Public Access section. They are exposed via Google Maps interfaces as well as Google Earth embedded views and linkable KMZ files. Neat stuff!
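As an illustration of the kind of output involved, here’s a minimal sketch of writing one of those map layers as KML using Python’s standard library. The placemark name and coordinates are made up; a KMZ is just this KML zipped:

```python
import xml.etree.ElementTree as ET

def track_to_kml(name, coords):
    """Build a minimal KML document containing a single track (LineString).

    `coords` is a list of (longitude, latitude) pairs; KML coordinate
    strings are "lon,lat,alt" tuples separated by whitespace.
    """
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    placemark = ET.SubElement(doc, "Placemark")
    ET.SubElement(placemark, "name").text = name
    line = ET.SubElement(placemark, "LineString")
    ET.SubElement(line, "coordinates").text = " ".join(
        f"{lon},{lat},0" for lon, lat in coords
    )
    return ET.tostring(kml, encoding="unicode")

# A made-up two-point track off the Delaware coast
print(track_to_kml("Demo track", [(-75.1, 38.8), (-75.0, 38.9)]))
```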

NASATweetup Mission Accomplished

Welcome Home Flat Samantha!

Samanthas and Astronaut Greg Johnson

Everything has finally come full circle and Flat Samantha is once again reunited with her creator, Samantha. Calling @FlatSamantha‘s trip a “circle” might be a bit of a misnomer, however, as she has had a wild adventure over the last couple of months. Her journey started in April when young Samantha found out that I was selected to attend the #NASATweetup for the final launch of the space shuttle Endeavour (#STS134). Samantha (and all the rest of the students in the lab) were disappointed that they couldn’t come with me to watch this historic launch, so Samantha took matters (and scissors and markers) into her own two hands and created a flat adventurer that she named Flat Samantha. She asked me if Flat Samantha could ride with me to the Endeavour launch and go up in the shuttle to the International Space Station. I would have loved to say “yes”, but I had to inform Samantha that time was too short and that I could only take her down to watch the shuttle launch – but that I would take lots of pictures of her during this adventure and let her share them via a Twitter account that was set up for her (after all, she was going down to a NASATweetup – how’s a girl to tweet if she doesn’t have an account ;?).

I emailed Stephanie Schierholz that I would like to bring along another #NASATweetup attendee and that she wouldn’t take up any extra space. Without batting an eye Ms. Schierholz said “no problem, I’ll have a #NASATweetup badge waiting for her as well”.

Flat Samantha’s STS-134 NASATweetup Badge

The original launch date for the shuttle was adjusted forward as there was a conflict between when Endeavour would be at the ISS and when the Soyuz 25S capsule would be there with some time-sensitive experiments. It just so happened that the new launch date fell during my son’s spring break period at school, so we scheduled a family vacation to Orlando prior to the launch and had a blast sharing the road trip down and the theme park adventures with Flat Samantha prior to the new launch date. I took her over to the Kennedy Space Center for the #STS134 #NASATweetup, where we enjoyed the many presentations that the fine people at NASA had arranged for us on day #1 and then came back for what ended up being a scrubbed launch on day #2 (see: “STS-134 NASATweetup is only half over“).

We sat in the tent waiting for the hundreds of thousands of other disappointed spectators that were parked outside the Kennedy Space Center to head home after the launch scrub, knowing that it would be a couple of hours at least before the roads would be passable. As we chatted amongst ourselves, I started talking with Beth Beck and she asked me about the back story on my flat companion. I told her about Samantha and how she would like to have seen Flat Samantha go into space, and that I could only promise to get her to the NASATweetup event to watch the launch. Ms. Beck said that since the launch had been scrubbed, there might be a possibility of fulfilling Samantha's wishes and that she would get back to me. Sure enough, a few days later I got an email from her saying that one of the astronauts – Gregory Johnson (aka @Astro_Box) – said that he would do what he could to get @FlatSamantha into space. True to his word, we received a picture from space of one @FlatSamantha in the cupola of the International Space Station.

Flat Samantha in the ISS Cupola (photo by Gregory Johnson)

Upon the Endeavour's return, Flat Samantha was escorted to a couple of other NASA Tweetup events, including the #NASATweetup for the SOFIA telescope, the @NASAJPL Tweetup with @Schierholz and even the historic landing of the space shuttle Atlantis (#STS135) with @BethBeck. Being flat and portable makes it much easier to get invited to some pretty awesome events, it seems.

The title of this post is "NASATweetup Mission Accomplished" because the journey home to creator Samantha was completed this past week. The journey home was not via a FedEx envelope or the like, however. Flat Samantha was escorted home and hand-delivered by none other than astronaut Gregory Johnson while he was on the east coast giving a mission debriefing to NASA employees at NASA HQ in DC. Samantha, her parents and I were invited by the ever-awesome Beth Beck to attend the debriefing and to meet with @Astro_Box for some photos afterwards. When the University of Delaware's ORB Lab students (who were anxiously following @FlatSamantha's adventure) found out about the trip, they asked if they could come too. I asked Ms. Beck whether that was possible and not only did she say "yes," but she provided the entire group with reserved up-front seating for the debrief!

NASA HQ Debrief (photo by Beth Beck)

I want to give a heartfelt thank you to Stephanie Schierholz and Beth Beck for allowing us all to join @FlatSamantha in her whirlwind adventure, both via Twitter and in person. I would also like to thank Gregory Johnson for not only making one little girl's wish come true by bringing her flat proxy into space, but also for taking time out of his incredibly busy schedule to bring that excitement to our small group of students and the rest of the world. The employees and representatives of NASA embody the compassion, the "can do" attitude and the educational and outreach expertise that the rest of us should pay close attention to. We are all honored to have been included in these adventures and the memories that we will carry with us for a lifetime. Rocket On NASA!

Group Photo with Greg Johnson and Flat Samantha (photo by Beth Beck)

PS – All of the Flat Samantha #STS134 #NASATweetup adventure photos have been uploaded to the Flat Samantha Ocean Bytes media gallery – enjoy!

A Wind Turbine Experience

Luckily, Blaise Sheridan is not afraid of heights, as he climbs up the UD 2-megawatt wind turbine for the second time. With his master's thesis revolving around wind energy, he is one of only four people from UD certified to climb the turbine. Although there is an elevator (more technically termed a personnel or "man" lift), it can only be used by those who take a more intensive 4-day training course. Instead, a 2-day Fall Protection/Competent Climber class was taken by two facilities employees (Don Smith and Rodney McGee) as well as two UD students (Blaise and DeAnna Sewell). With this course under their belts, they can climb the ladder to the top of the 256-foot-tall turbine. For their safety, they are always connected to a guide wire that clips onto the cable grab of each climber's harness. The cable ensures that if a climber falls, they will drop less than a foot.

The goal of this trip was to string up three cables to install bat microphones. The microphones will allow researchers to see how often bats pass around the turbine. This anticipated one-day job ended up taking about 2.5 days due to lightning and the large amount of on-site planning that needed to take place. With the help of a Gamesa contractor, Blaise and Rodney were able to install the research equipment while the contractor performed routine maintenance and provided his expert guidance.

The turbine is currently producing more electricity than projected, although how much more is still being studied. On average, it produces more energy than the university needs, which makes the excess available to the town!

Inside the nacelle – the bus-sized structure on top of the tower where all the interesting mechanical and electrical components are housed – Blaise notes, "It must be at least 120 degrees" from the waste heat given off by the electrical transformers, not to mention all the gearboxes, the friction and the fact that heat rises up the turbine. But outside, on top of the nacelle, there's enough airflow to cool you off! Blaise admits it can be very tiring to climb, but the incredible view from the top is worth it. His favorite part is watching the wake off the boats coming into Roosevelt Inlet. With the hope of additional renewable energy options in the future, he says, "It's still very novel for a university to have this turbine and it's been a once-in-a-lifetime experience…one to check off the bucket list. Not to mention it's a great bar story."

Timelapse of a Day in the ORB Lab’s GVis Room

I was showing the students how to operate the “birdcam” so they can use it to record a series of stills to create a time lapse video of an upcoming research cruise on the RV Hugh R Sharp. We left the birdcam in the corner and let it click away all day, shooting a new still every minute and the video above is the resulting masterpiece. It is embedded from “The UD ORB Lab” channel on YouTube.

You can learn more about the “birdcam” in a previous post about “Timelapse Video on the Cheap“. The GVis Room pictured above is the “Global Visualization Room” that was described in the post “How to Construct a Global Visualization Lab“.
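
If you want to try something similar yourself, the arithmetic behind a still-per-minute time lapse is simple, and ffmpeg can do the assembly. Here's a hypothetical helper sketch (the filenames, frame rate and ffmpeg incantation are my assumptions, not what the birdcam's own software uses):

```python
def frames_captured(hours: float, interval_s: int = 60) -> int:
    """How many stills a one-shot-per-interval camera collects."""
    return int(hours * 3600 // interval_s)

def playback_seconds(num_stills: int, fps: int = 30) -> float:
    """How long the assembled time lapse will run at a given frame rate."""
    return num_stills / fps

def ffmpeg_args(glob_pattern: str, out_file: str, fps: int = 30) -> list[str]:
    """Build an ffmpeg command that stitches JPEG stills into an MP4."""
    return ["ffmpeg", "-framerate", str(fps),
            "-pattern_type", "glob", "-i", glob_pattern,
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out_file]

if __name__ == "__main__":
    # A 10-hour lab day at one still per minute:
    stills = frames_captured(10)                     # 600 frames
    print(stills, playback_seconds(stills))          # -> a 20-second clip
    print(" ".join(ffmpeg_args("stills/*.jpg", "timelapse.mp4")))
```

The takeaway: the capture interval and playback frame rate together set the "speed-up factor" of the final video, so pick the interval to suit the length of the event you're compressing.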

Thanks to the ORB Lab crew for sharing!


Endeavour (STS-134) Launch Photos

This is a gallery of the launch photos that I took in the ~20 seconds that we had between ignition and the space shuttle Endeavour disappearing into the clouds. I'll set up an outside gallery of all of the 300+ photos that I took in the coming week or so. I hope you enjoy them as much as I do.

DeepZoom of Endeavour on the Launch Pad

[ shut down, so my DeepZoom image is no longer available. I’ll re-create it soon…]

(The image above is dynamic and zoomable, play around with it some. Mouse over it and use your scroll wheel, click and drag around on the image, or click the plus and minus buttons, even go full screen with the button on the lower-right-hand corner – have fun with it!)

One of the challenges of taking photos of special events and places is that they always look so small and lacking in visual acuity and detail. You take a picture and then later, when you’re looking at it, you feel underwhelmed that it just doesn’t capture the clarity that you remember seeing.

To create the zoomable picture above of the Endeavour (STS-134) on the launch pad, I cobbled together two technologies: Microsoft ICE (Image Composite Editor) to stitch the photos, and DeepZoom to tile the result and generate the JavaScript that lets you zoom in and out of the image to enjoy much more detail. You can learn more about Microsoft ICE via this HD View blog posting, including details on what it can do as well as download links (it's free!). I used my digital camera to zoom into the shuttle while it was on the launch pad after the RSS shield retraction and took a matrix of photos, making sure that each photo overlapped the others a little bit so that ICE could stitch them into one large hi-res photo. Since we're limited in the number of pixels we can display on a screen, I leveraged DeepZoom to break the image into a series of sub-images and to create JavaScript that swaps in higher-resolution tiles as you zoom into the image – similar to what you find when you zoom into a Google Map or the like.

Microsoft has made it quite easy to automagically create DeepZoom images (based on SeaDragon technology) via their site. All I had to do was upload the composited image that I'd created using ICE to a web server, feed it the URL of the large image file and then, once the file had been processed, copy the embed code from the results and paste it into this post. The resulting JavaScript and tiles are hosted on their site, so I didn't even need to include them in my image file holdings.
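
If you're curious what the tiling service is doing under the hood, the DeepZoom pyramid follows simple arithmetic: the deepest level number is ceil(log2(max dimension)), each level above it halves the image, and every level is cut into fixed-size tiles. A small sketch (the 4096×3072 image size and 254-pixel tile size are illustrative assumptions, not the actual dimensions of my composite):

```python
import math

def max_level(width: int, height: int) -> int:
    """Deepest DeepZoom pyramid level: where the full-resolution image lives."""
    return math.ceil(math.log2(max(width, height)))

def dims_at_level(width: int, height: int, level: int) -> tuple[int, int]:
    """Image dimensions at a given pyramid level (each level down halves them)."""
    scale = 2 ** (max_level(width, height) - level)
    return math.ceil(width / scale), math.ceil(height / scale)

def tile_count(width: int, height: int, tile: int = 254) -> int:
    """Number of tiles needed to cover one level of the pyramid."""
    return math.ceil(width / tile) * math.ceil(height / tile)

if __name__ == "__main__":
    w, h = 4096, 3072                       # hypothetical stitched-panorama size
    top = max_level(w, h)                   # deepest level
    print(top, dims_at_level(w, h, top - 1), tile_count(w, h))
```

The viewer only ever downloads the handful of tiles covering your current viewport at your current zoom level, which is why a huge composite stays snappy in the browser.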

I hope this helps in two ways:
A) You can appreciate the awesome sight that we were seeing at the STS-134 NASATweetup
B) You now know how to fish (i.e., how to create cool visualizations like this). Have at it!

ps – If you want to pull down the full hi-res image that was used to create this so you can print out an awesome poster of the shuttle on the launch pad, you can get it here. Enjoy!

Endeavour Launch Photo Time Lapse

I took as many photos as I could during the Endeavour launch yesterday morning as fast as my camera would allow. Here is a time lapse of the photos taken before it disappeared into the clouds. I uploaded it to YouTube at 1080p, so make sure to go full-screen with it. Enjoy!

Update: Just found a link to a video that @AVWriter posted – crank up the subwoofer and enjoy the launch from the same vantage point that we had!

Endeavour RSS Shield Retraction Time Lapse

On Sunday we had the unique opportunity via the STS-134 #NASATweetup of being able to take pictures of the space shuttle Endeavour from about 600 yards away while the RSS shield was "retracted". RSS stands for Rotating Service Structure, and it is rotated away from the shuttle prior to fueling and the subsequent launch. While I was busy snapping a gazillion pictures, I set up my el-cheapo digital video camera on the tripod and recorded the ~20 minute process. Below is a fast-forwarded time lapse, squeezing the entire process into just over a minute. Enjoy!

Conch Reef Survey for NASA’s NEEMO 15 Project

Dr. Art Trembanis' Coastal Sediments, Hydrodynamics & Engineering Lab (CSHEL) has been pretty busy lately. Not long ago I did a post about the prototype sub-bottom profiler section that he added to his Autonomous Underwater Vehicle (AUV) (see: Sub-Bottom Profiling using an AUV). I was down at the NASATweetup for the Endeavour (STS-134) launch not long ago, and I got chatting with some folks from NASA's Open Government Initiative about the NEEMO 15 project (NEEMO stands for "NASA Extreme Environment Mission Operations") and we discussed UD's involvement.

It takes a village of roboticists to run a successful AUV campaign

It takes a village of roboticists to run a successful AUV campaign

When I emailed Dr. Trembanis upon my return to Delaware, he emailed me back with instructions to browse to UNCW’s Life Support Buoy live webcam above the Aquarius Reef Base. Sure enough, he was there aboard the RV George F. Bond monitoring his Gavia Scientific AUV as it acoustically mapped the Conch Reef around the Aquarius as a precursor robotic mission for NEEMO 15.

GoPro Hero Attached to the AUV

Here is video footage shot by an off-the-shelf HD GoPro Hero digital video camera that was attached to the AUV:


The mapping mission ran for 4 days and covered approximately 100 km, resulting in about 15 GB of raw data. Here's an overview map of the mission.

Aquarius NEEMO 15 precursor survey

Many thanks to Dr. Trembanis for the video and imagery to go along with the story. Be sure to visit NASA’s NEEMO site to learn more about the mission and what’s to come. Visit the CSHEL site to learn more about the research that’s going on there and to see other cool video and image products that they’re producing.

STS-134 NASATweetup is only half over

I’m back from the Kennedy Space Center and the first half of the STS-134 NASATweetup. We got through most of the activities slated for Day #1 – which included meeting the ~149 other #NASATweetup attendees, a demo of the Extravehicular Mobility Unit (EMU) and Mark III spacesuits, and talks by Dana Hutcherson (flow director), Tara Ruttley (ISS associate program scientist) and astronaut Clay Anderson (@Astro_Clay). They really rolled out the red carpet for us!

@CPUGuru, @FlatSamantha and @Astro_Clay

The second half of the day involved visits to the Shuttle Landing Facility and the Mate-Demate Device (a big honkin' crane and assembly used to lift the shuttle onto and off of the 747 that carries it), as well as the Vehicle Assembly Building (the large building behind us in the picture above) – also known as the "world's largest single-story building" – in which they work on and assemble the shuttle, booster rockets, etc. The last part of Day #1 was supposed to be a site visit to the shuttle itself to watch the retraction of the Rotating Service Structure (or RSS), but a rather nasty storm front presented itself and all sorts of dark clouds, rain and lightning ensued.

The Lightning Storm

Retraction of the RSS was delayed from its original 7:00pm time to much later in the evening, so we missed being able to get up close and personal with the shuttle. By the time we arrived for “Launch Day” the following morning, the RSS had already been retracted and the fuel tanks were being filled with liquid oxygen, so we were unable to get any closer than the press site almost 3 miles away.

On Day #2 we had a group picture taken by the countdown clock and talks by astronauts Ricky Arnold (STS-119 Discovery) and Leland Melvin (@Astro_Flow – now associate director for Education at NASA). We also had a talk by Daire McCabe – a designer at Lego followed by a weather/launch update by Lt. Col. Patrick Barrett of the 45th Weather Squadron.

We all went out to the roadside in front of the Vehicle Assembly Building (VAB) to watch the caravan carrying the astronauts to Launch Pad 39A go by and wish them well; however, the vans came, stopped, and turned back around (a first, we're told). Apparently a power coupling unit was not functioning on the shuttle, and they scrubbed the launch. We were all a tad disappointed, but I heard a good quote along the lines of "it's better to be on the ground wishing you were in the air than to be in the air and wishing you were on the ground".

Caravan Carrying the Astronauts

The current status is that they are in the process of replacing the faulty power coupling unit and that the earliest possible launch date is May 10th. Both @FlatSamantha and I (@cpuguru) plan on heading back down to KSC as soon as they tell us a definitive launch date. We’ll be sure to take some awesome pictures and will keep you informed once the second half of this #NASATweetup resumes. For a good timeline of the adventures of @FlatSamantha, be sure to follow her on her Twitter page, where she’ll keep you informed and upload pictures of what’s going on right then. Until then, we’re on hot stand-by, our bags are packed and we’re anxiously awaiting the good news that the launch is a go.

Flat Samantha Is Coming to the STS-134 NASA Tweetup

Samantha and her friend Flat Samantha

Flat Samantha is in the house! I was contacted by young Samantha (pictured left – the non-flat one) to see if I had room for Flat Samantha to ride with us to the Kennedy Space Center in Florida when we embark for the NASA Tweetup at the Space Shuttle Endeavour (STS-134) launch on April 29th. Today we met up with her and her parents and got instructions on how to take care of Flat Samantha.

Samantha has provided meticulous training to Flat Samantha and has crafted a first-class space suit along with a helmet to help her breathe in space should the opportunity present itself.

During our pre-flight briefing, I gave her the run-down on what the travel plans will be. I promise to take good care of our new travel companion and will post pictures at every major step in our journey. Thanks for entrusting us with your friend Samantha! We’ll be sure to take good care of her and will return her to you safely when this adventure is over.

Flat Samantha is following in the footsteps of some of her other flat siblings, including the original "Flat Stanley", who visited twice: once in 2002 when he went into space on a 14-day mission aboard the space shuttle Endeavour, and again in 2011 when he visited NASA HQ courtesy of Beth Beck. Other flat adventurers include "Flat Paxton" and "Flat David", who also had the opportunity to visit NASA.

Flat Samantha will be tweeting about her adventures at the #NASATweetup – you can follow her tweets via @FlatSamantha as well as mine at @CPUGuru. Welcome to the adventure Flat Samantha!

NASATweetup for the Final Endeavour (STS-134) Launch

Ocean Bytes AstroTweeter @cpuguru

It's official – I'm heading to Kennedy Space Center in sunny Florida for the Space Shuttle Endeavour (STS-134) launch as part of what they call a "#NASATweetup". I follow @NASA via my personal Twitter account – @cpuguru – and when they announced that they were accepting applicants for the 150 spots granting backstage access to the Space Shuttle Endeavour's final launch, I beat feet over to the site and entered the contest. Apparently there were over 4,000 applicants for these openings from around the world. It blew my mind when I finally got the email from NASA saying that I was selected. I am deeply honored to be included in this auspicious event.

What is a “NASA Tweetup” you ask? Well, according to the NASA Tweetup page:

“A Tweetup is an informal meeting of people who use the social messaging medium Twitter. NASA Tweetups provide @NASA followers with the opportunity to go behind-the-scenes at NASA facilities and events and speak with scientists, engineers, astronauts and managers. NASA Tweetups range from two hours to two days in length and include a “meet and greet” session to allow participants to mingle with fellow Tweeps and the people behind NASA’s Twitter feeds.”

STS-134 Patch

A list of the ~150 confirmed attendees of the #NASATweetup for space shuttle Endeavour's launch can be found via the @NASATweetup/sts-134-launch list. A fellow attendee, @ChrisCardinal, has set up a comprehensive blog site that he's using to post information pertinent to the launch and the STS-134 Tweetup. It's been quite useful for tracking some of the behind-the-scenes information about the shuttle launch, as well as updates such as the delay of the launch from April 19 to a new (unless it changes again) April 29 launch date due to an overlap it would have had with the docking of a Russian Progress supply vehicle at the International Space Station. My understanding is that the delay came about because the Russian vehicle needed to be docked to the ISS during the same time frame that the Endeavour's 14-day mission would have fallen in. Apparently there are two docking ports on the ISS and the two vehicles could theoretically have been docked simultaneously, but I believe that process has not yet been fully vetted and approved, so the safer alternative of delaying the shuttle launch was selected.

I’m taking you with me!

The bad news is that while I won the lottery to attend the NASA Tweetup, I am unable to physically take anybody else with me. The GOOD news is that doesn't mean you can't come with me virtually. I'm brainstorming on what kinds of equipment I can pull together that would allow me to share as much of this experience with you as I can through the magic of modern portable electronics. I want to cobble together a high-def webcam and perhaps a tablet or laptop so that I can record (and maybe live stream) my adventure à la Hat Cam Guy (aka Joel Glickman). Since I don't have an iPhone to hot-glue to my ball cap, I might have to rely on the generosity of others to help me pull this off. If you have some equipment and/or resources you'd like to donate to the cause please let me know by emailing me at

Q&A for NASA

If you look in the menu above, you’ll see that I added a page called “Q&A for NASA” so that school kids (and adults ;?) can post questions that they’d like me to try to get answers for while I’m down there. If you have a question that you’d like me to try and find an answer to, please feel free to add it in a “comment” to the page and I’ll do my best to get it answered while I’m down at the Kennedy Space Center.

Government Shutdown?

Now we are apparently going to be playing the “chase the launch date” game as we worry about the possible impact that a US Government shutdown would have on the launch due to the lack of a budget from Congress. I’ve been following the Twitter hashtag “#NASATweetup” and keeping a watchful eye on what the latest rumors are as to whether the mission will be delayed from its current April 29 launch date if funding isn’t allocated to keep governmental operations rolling. I’m crossing my fingers and hoping that Congress can get matters worked out.

A HUGE shout-out to Tammy!

I’ve been emailing back and forth with our awesome Marine Public Education Office team about this incredible opportunity to reach out and help educate and include kids in this adventure. I mentioned that it would be cool to include more of a space theme in the Ocean Bytes header image and in the time it took me to drive home I had the awesome header image that you see above in my inbox from our incredibly talented Tammy Beeson. Tammy ROCKS!

Sub-Bottom Profiling using an AUV

I was minding my own business, walking between Smith Lab and Cannon Lab buildings when what to my wandering eyes should appear but a reeeallly long stretched out Gavia Scientific AUV. My geek radar started going off and I just HAD to investigate exactly what was inside these newly milled sections of hull.

Gavia Scientific AUV with a recent addition

I invited myself into the lab and started asking some questions. It turns out that these new sections contain a prototype Teledyne Benthos Chirp III sub-bottom profiler that was specially designed to integrate with an AUV. Dr. Art Trembanis' CSHEL lab and Val Schmidt from the University of New Hampshire's Center for Coastal and Ocean Mapping were working with UTEC Survey Inc. to successfully integrate and test this new addition to the AUV's sensor lineup. I cornered Nick Jarvies from UTEC and he gave me the run-down on the new addition (thanks Nick!):


Sample SBP

What is a "sub-bottom profiler" you ask? Per the Wikipedia entry, it is a "powerful low frequency echo-sounder…developed for providing profiles of the upper layers" of the ocean floor – in the case of the Chirp III, probably operating in the range of 10-20 kHz. Per Dr. Trembanis: "Data is stored in an onboard Compact Flash card in an industry standard SEG-Y format. The advantage of a chirp signal over a single frequency output is that through chirp demodulation of the returning signal one can get a better compromise between penetration and resolution. The lower the frequency the greater the penetration but the less the resolution (and vice versa for high frequency) so a chirp signal which modulates from a low to high frequency provides penetration and resolution. All of this depends to a great degree on the kind of bottom material one is trying to penetrate."
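
Dr. Trembanis' point about chirp demodulation is the classic pulse-compression trick: a long frequency sweep carries lots of energy, yet when the return is correlated against the transmitted sweep (a matched filter), it collapses into a sharp peak in time. Here's a toy, pure-Python illustration; the sample rate and sweep band are made-up numbers, not the Chirp III's actual parameters:

```python
import math

def linear_chirp(n: int, f0: float, f1: float, fs: float) -> list[float]:
    """Sample a linear frequency sweep from f0 to f1 Hz at fs Hz."""
    T = n / fs                          # sweep duration in seconds
    k = (f1 - f0) / T                   # sweep rate, Hz per second
    return [math.cos(2 * math.pi * (f0 * t + 0.5 * k * t * t))
            for t in (i / fs for i in range(n))]

def correlate_at(sig: list[float], lag: int) -> float:
    """Un-normalized correlation of the signal with itself at one lag."""
    n = len(sig)
    return sum(sig[i] * sig[i + lag] for i in range(max(0, -lag), min(n, n - lag)))

def matched_filter_peak_lag(sig: list[float]) -> int:
    """Lag at which the matched-filter (autocorrelation) output peaks."""
    n = len(sig)
    return max(range(-n // 2, n // 2), key=lambda lag: correlate_at(sig, lag))

if __name__ == "__main__":
    ping = linear_chirp(n=200, f0=1000.0, f1=2000.0, fs=10000.0)
    # The correlation peaks sharply at zero delay and falls off fast at
    # nearby lags -- that narrow peak is what gives the range resolution.
    print(matched_filter_peak_lag(ping), correlate_at(ping, 0), correlate_at(ping, 30))
```

A single-frequency ping of the same length would not compress this way, which is the "compromise" the quote describes: the chirp buys the penetration of a long, energetic pulse and the resolution of a short one.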

Internal view of the Benthos Chirp III AUV SBP

The advantages of an AUV-based sub-bottom profiler (also per Art Trembanis) are:

  • We remove lots of water column data that would normally be unwanted and would have to be removed or ignored from the record.
  • Because we can precisely follow the terrain near the bed, or hold a constant depth well below the surface, we can remove/diminish the effects of waves that cause a ship to bob up and down.
  • We are able to do higher-resolution characterization of the subsurface in greater water depths, since otherwise, from a surface ship, you would have to use a lower-frequency system to penetrate through the water column.
  • Because of the precise navigation of the AUV, we can get very tight line spacing and precise following of features (e.g., pipeline routes), which allows us to provide better data more efficiently.

Thanks to everybody for taking time to talk on camera and for answering my questions!

Outdoor Webcam 101

Some time ago I was asked what it would take to get a live webcam feed of the osprey nest next to our Marine Operations Building. We have an osprey couple – Ricky & Lucy – and people love to check in on them throughout the summer months when they come home to Lewes.

Ricky & Lucy

I thought I’d share the software and hardware lineup that I selected to do the job and explain some of my choices. The equipment I ended up ordering was:

  • Sony SNC-RZ30N PTZ (pan + tilt + zoom) IP Webcam (~$1,100)
  • Dotworkz D2 Outdoor Enclosure with heater/blower (~$500)
  • Videolarm APM3 Pole Mount Bracket (PDF) (~$60)
  • WebcamXP network camera monitoring, streaming & recording software (~$99)
  • Some sort of intermediary computer to run the WebcamXP software
  • 100′ Outdoor Extension/Power Cord
  • 100′ Underground Double-shielded Cat5 Network Cable
  • Uninterruptible Power Supply (UPS)
  • Desiccant packs, Velcro Tape & a Plastic Container
  • RainX and Marine Silicone RTV

Why I picked what I did

We wanted to mount the webcam on an existing antenna tower next to the Marine Operations Building, thus the pole mount bracket. The webcam needed to have a short shopping list of features – including:

  • Stand-alone operation and a physical network jack for IP (network) streaming – we didn’t want to potentially impact building wifi performance and we just wanted to be able to run a network line up to it (no USB webcams need apply). Doing this with a network cable tied into our building switch meant that the packets the camera generated would not negatively impact the rest of the users.
  • It needed to have sufficient intelligence that we could remotely log into it to position it and/or program preset camera stops and zoom factors.
  • A healthy optical zoom so we could zoom up close on the nest for an up-close experience.
  • Image stabilization built-in so that when we did zoom in, any sway or vibration in the antenna pole wouldn’t give us a jittery image.
  • FTP/FTPS functionality – where you can have the webcam automagically FTP a still frame to an outside server at a user-defined interval. We used this feature to amass the still shots that we used in our (award winning) time lapse videos for the Lewes wind turbine construction. (We stopped the camera from moving for the 2-3 weeks it took to complete construction.)
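
The interval-upload feature in that last bullet is simple enough to sketch. This is not the camera's firmware, just an illustrative Python equivalent built on the standard-library ftplib; the host, credentials and filename scheme are placeholders:

```python
import ftplib
import io
from datetime import datetime

def still_name(ts: datetime, prefix: str = "osprey") -> str:
    """Timestamped filename so stills sort chronologically, e.g. osprey_20110429_0730.jpg."""
    return f"{prefix}_{ts:%Y%m%d_%H%M}.jpg"

def upload_still(host: str, user: str, password: str,
                 jpeg_bytes: bytes, ts: datetime) -> str:
    """Push one JPEG frame to the outside server; returns the remote name."""
    name = still_name(ts)
    with ftplib.FTP(host) as ftp:          # swap in ftplib.FTP_TLS for FTPS
        ftp.login(user, password)
        ftp.storbinary("STOR " + name, io.BytesIO(jpeg_bytes))
    return name
```

Run something like this on a timer (or let the camera's built-in scheduler do it) and you end up with the chronologically named pile of stills that a time-lapse video is assembled from.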

We selected the Sony SNC-RZ30N for the job, but before you go out hunting for one, note that it seems to have been discontinued. In its stead now are the SNC-RZ25N (with a slightly lower 18x optical zoom than the 30N) and its replacement, the SNC-RZ50N (26x optical zoom), which does both Motion JPEG and H.264 streaming. The SNC-RZ30N has its own built-in web server, so we could control it via a web browser. It has a 25x optical zoom, so we can get up close and personal with the osprey nest. Don't be fooled by some webcams which tout a zoom without specifying that it's an optical zoom. If it doesn't say "optical zoom", it's most likely a digital zoom, meaning a lower-resolution subset of the total number of pixels the camera can capture.
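
To see why that distinction matters, consider what a digital zoom actually does: it crops the sensor and scales the crop up, so the pixel count you keep falls with the square of the zoom factor. A quick illustration (the 640×480 sensor size is just an example, not any particular camera's spec):

```python
def digital_zoom_pixels(sensor_w: int, sensor_h: int, zoom: int) -> int:
    """Pixels actually captured after a digital 'zoom' (a center crop)."""
    return (sensor_w // zoom) * (sensor_h // zoom)

if __name__ == "__main__":
    full = digital_zoom_pixels(640, 480, 1)    # the whole sensor
    at_4x = digital_zoom_pixels(640, 480, 4)   # only 1/16 of the sensor remains
    print(full, at_4x)
```

An optical zoom, by contrast, magnifies the scene onto the full sensor, so every pixel is still real image data at any zoom factor.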

The SNC-RZ30N supports up to 16 presets, which allow you to position the camera where you want it pointed, at the zoom factor you want, and to save that position as a "preset". You can then have the camera cycle itself through the various presets at a user-specified panning speed, pausing at each stop for a user-specified amount of time. Quite handy when we are cycling the webcam to look at various points of interest on campus, and even handier for removing the osprey nest preset from the mix when the ospreys head south for the winter.
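
The rotation logic is easy to model in code. A tiny sketch; the preset names are invented, and the real camera is configured through its own web interface rather than anything like this:

```python
from itertools import cycle, islice

# Hypothetical preset list -- only "osprey_nest" mirrors anything in the post.
PRESETS = ["osprey_nest", "harbor", "wind_turbine", "campus_green"]

def active_presets(presets: list[str], ospreys_home: bool) -> list[str]:
    """Drop the nest stop when the birds have headed south for the winter."""
    return [p for p in presets if ospreys_home or p != "osprey_nest"]

def tour(presets: list[str], stops: int) -> list[str]:
    """The first `stops` positions of an endless patrol over the presets."""
    return list(islice(cycle(presets), stops))

if __name__ == "__main__":
    print(tour(active_presets(PRESETS, ospreys_home=True), 6))
    print(tour(active_presets(PRESETS, ospreys_home=False), 6))
```

The seasonal filter is the nice part: rather than reprogramming the camera, you just change which presets are in the rotation.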

Sony web interface

As you can see, the webcam is pretty close to the ocean, so we needed to find an enclosure for it that could:

  • Survive a salty marine environment
  • Remain water-tight
  • Provide space for desiccant packs to remove any excess moisture so that the inside of the enclosure didn’t fog up on cold mornings
  • Provide automatic heating of the enclosure on cold mornings to prevent the outside dome from frosting up
  • Provide power for the camera inside

We chose the Dotworkz D2 Enclosure with the optional heater. It has two sealable penetrations that allowed us to get power to the unit by cutting off the end of a heavy-duty outdoor extension cord, tinning the tips and tightening the screws down onto them to power the power supply and heater inside (green circle). The other end of the extension cord is plugged into a UPS/surge protector in the radio room below, since the webcam is strapped to a huge metal pole sticking up into the (sometimes lightning-filled) sky. An underground double-shielded network cable was run up the tower and inserted through the second penetration, after which we crimped an RJ45 end onto it and simply plugged it into the back of the webcam. The power supply came with an end that was already compatible with the power connector on the back of the webcam (orange circle), so powering the camera was a cinch.

Enclosure interior

The enclosure also came with a magic universal mounting bracket and stand-offs of various heights to ensure you can position a compatible webcam at the right height to see out the bottom dome.

Mounting plate stand-offs

Here is a picture of the connectors on the back of the webcam:

Sony back

We attached the camera as high as we could while still being able to reach it with a scissor lift that facilities owns. The first time we installed the enclosure, we had it mounted by the pole climbers that were doing maintenance on the antennas at the top of the tower (no way I could ever do that – waaayyy too high up). We relied on the seal built into the enclosure to keep out the moisture, but unfortunately it slowly let moisture in, which started pooling up inside the dome over several months. Since we didn't have the requisite climbing gear to climb the tower, we ended up having the tower climbers move the enclosure down the pole – just high enough for us to reach with the scissor lift – when they came back again. We didn't take any chances this time. We went along the exterior seam and penetrations with some marine RTV (silicone sealant), and I used some Velcro tape to secure a plastic container filled with desiccant packs on top of the black mounting plate to keep the inside as dry as possible. The Velcro keeps any tower vibrations from storms and the like from working the desiccant packs over the edge and down onto the dome.

Webcam on tower

We made sure to loop some additional network and power cable up the tower just in case we needed to move the enclosure to a different height or to another side of the tower. Make sure to form a "drip loop" with any cables – a loop that dips down below the enclosure and then back up away from it. This keeps water from flowing down the cable and running against the penetration, thus minimizing the likelihood of water making its way into the housing. Remember that the cables are exposed to the elements, including ultraviolet radiation (UV from sunlight), which can break down most plastics and vinyl cable sheathing. We selected an extension cord that listed UV resistance and an outdoor-rated network cable to stave off UV damage to the cables.

Webcam on tower closeup

One last treatment I did was to apply Rain-X to the outside of the dome. It’s like a wax coating for glass that makes water bead up and roll down the dome rather than stick to the outside and fog up your view.


One last topic I'd like to cover is the use of WebcamXP as a bridge between the webcam and the outside world. The problem with most webcams is twofold: scale and security. The internal web server in most webcams can only handle about 25-50 simultaneous users; beyond that, attempts to view the feed by users 51 and up will fail. To overcome this limitation, we purchased WebcamXP as an intermediary. The software installs on a desktop or server, makes a single connection to the webcam, and handles the task of streaming to the web server that you're embedding the feed on. By acting as the intermediary, WebcamXP offloads the streaming load from the webcam. In our case, we embed a Flash SWF file on the external web server that gets its stream from WebcamXP.
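
WebcamXP's internals aren't documented here, but the core idea of an intermediary (one upstream connection to the camera, many downstream viewers) can be sketched with a shared latest-frame buffer. All the names below are my own illustration, not WebcamXP's API:

```python
import threading

class FrameRelay:
    """One producer thread pulls frames from the camera; any number of
    viewer threads read the latest frame without touching the camera."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None
        self._seq = 0

    def publish(self, frame):
        # Called by the single camera-facing thread.
        with self._lock:
            self._frame = frame
            self._seq += 1

    def latest(self):
        # Called by each viewer; the camera sees none of this load.
        with self._lock:
            return self._seq, self._frame

relay = FrameRelay()
relay.publish(b"\xff\xd8...jpeg bytes...")   # stand-in for a real MJPEG frame
seq, frame = relay.latest()
```

However many viewers call `latest()`, the camera still only serves the one producer connection, which is exactly what lets an intermediary scale past the camera's 25-50 user limit.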

The second issue that you run into with many webcams is that of security. They have some basic security built in, but in order to stream the video from many of them, you have to expose the ability to control and position the webcam to the end user. The last thing we wanted was random users repositioning the webcam. Our solution was to give the webcam an internal IP address that was not reachable from outside our border routers. The system running WebcamXP was given both a publicly accessible IP address and an internal one, so it could access the webcam's video stream and serve it up externally.

Other nice features of the software are:

  • Watermarks – the software allows you to embed a watermark image over your video stream, branding your video with your logo and/or text.
  • Ability to expose the video stream via a Java, JavaScript or Flash client.
  • Ability to handle multiple IP webcams simultaneously. If you want to grow the number of webcams you expose, you would only need one system running WebcamXP to stream the feeds from multiple webcams.
  • A free version, which can handle a single webcam. This allows you to kick the tires and make sure the software does what you want before you buy. (Note: the free version does not allow watermarking your logo on the video stream.)

I had initially looked into using Silverlight and/or the IIS Streaming Server to handle this role, but they were early in their development when we set the webcam up, and it was more expedient to use WebcamXP. I'd still like to have our actual web server connect to the internal webcam and handle streaming the content using Silverlight or some other non-Flash mechanism. If you have feedback on how to accomplish this, I'm all ears. I think it would make for a much more flexible mechanism to handle the various browsers (including mobile, iPad, etc.) that are coming online.

Thanks for enduring the long post and please feel free to comment if you can think of things I missed or have any suggestions on how to improve things.

Flat Stanley Rides a REMUS in Antarctica


Flat Stanley joined researchers at Palmer Station in Antarctica in search of penguins and environmental data about their feeding grounds in January.  This video showcases just how awesome this icon of international literacy and community can be. Armed with only a minimal amount of training, Flat Stanley managed to pilot a REMUS Autonomous Underwater Vehicle in a precision pattern through the frigid waters off the West Antarctic Peninsula  — gathering vital information that will allow scientists to understand the feeding habits of Antarctic penguin species.

You can see a map of the many locations this worldly traveler has gone to and find out more about the Flat Stanley Project on their website. Many thanks to the student travel coordinators at Sierra Canyon School in Chatsworth, CA for helping Flat Stanley make his way this far south.

Awesome job Stanley!

Trip to Penguin Colony on Biscoe Point

Folks seem to like penguins….so much so that we even made the front page of the University of Delaware website! Hurray! This shot is of the Adélie penguin colony on Humble Island. We had just deployed a satellite transmitter on one of the birds so we would know where to send the underwater robots (Gliders and REMUS’s).

University of Delaware Main Page

Remnants of the storm remain in the area and wind gusts are keeping the science boats at station today. Nevertheless, we did have a break in the clouds and the sun came out. The warm sun made the Gamage glacier very active and I happened to get a great video of calving. Right place, right time.


We headed out to Biscoe Point to deploy another satellite transmitter on a penguin. The plan was to remove the transmitter from a Gentoo penguin which had been at Biscoe Point since midnight. The challenge is to find the tagged bird amongst the rest! On the way, we found that a large amount of brash ice had surrounded Biscoe Point, so we had 1-2 km of slow travel through the ice. Marc Travers (our boat driver and expert birder) did an excellent job snaking in between the large chunks. Outboard motors and large chunks of brash ice don't mix well. Hitting a large piece of ice can leave you on a boat with a busted motor. That is why we carry an extra motor in every boat.


When we arrived at Biscoe Pt., we found that an Elephant Seal had climbed into one of the Gentoo Penguin nesting areas. If penguin chicks are too young or left unguarded by their parents, they can easily be crushed by these massive seals.

Southern Elephant Seal in Gentoo Penguin Colony overshadowed by Mt. William

Luckily it looked like the Gentoo chicks were old enough to avoid it. Occasionally a Gentoo adult would peck at the Elephant Seal’s thick blubber, but the giant beast didn’t seem to be bothered by it at all.  We made our way around the Gentoo colony looking for our tagged bird. She happened to be perched right on a rock preening herself where we could see her plain as day. The birders quickly removed the tag and she went back to her nest.

Penguin Chick Eaten by Skua Birds

Elephant Seals aside, the biggest threat to the chicks is the Skua. These aggressive scavenger birds swoop down, grab chicks right from their nests, and make a meal out of them. There was plenty of evidence at Biscoe Pt. that the Skuas had been active here.

Still, even with the ever-present Skuas, there were plenty of Gentoo chicks that were starting to look more and more like their parents. They are starting to get their adult feathers, which are not waterproof yet, but will be soon.

Gentoo Chick with Parent at Biscoe Pt.

The next step was downloading the dive information from the tag. This data will help us understand how deep the penguins are feeding. The dive data will help us properly analyze the data coming from the underwater Gliders and REMUS vehicles. The Birders are able to download and ready the tag for its next deployment in just a few minutes with a laptop computer in the field. These are amazing little tags.


Adélie Penguin packed with a satellite transmitter.

We walked around a small bay to the neighboring Adélie Penguin colony and were able to quickly identify an Adélie penguin that would be good for carrying our satellite transmitter. She was quickly tagged and released back to her nest. Her two grey puffy chicks are just to her right. We will be watching the satellite data closely to find out where she is eating. Then, we will send our underwater robots to sample that section of ocean. In a few days the Birders will head to Biscoe Pt. again to retrieve the tag, and thank her for her contribution to science.

Antarctic Storm Moves In

Our streak of excellent weather has officially come to an end with a large low pressure system in the Drake Passage.

Storm moves into Palmer Station

The weather was even too tough for the ever-working "birders," who were going to deploy a few satellite tags on penguins today. REMUS missions are cancelled for the day. That might be good, since one sprung a leak on a mission yesterday. Only the gliders are out….which makes gliders an awesome platform for ocean science when the weather gets a bit "snotty". They don't complain and don't get sea-sick. The "Blue Hen" continues its mission mapping the foraging locations of penguins when even the penguins are too scared to go out! That means I get to stay home and peel garlic (very necessary for all the amazing food here).

Garlic….it’s like the best thing you can eat when it is windy

Saturday is also the day we all clean the station and have a station meeting. I got to help clean the kitchen today. That was really nice because I totally miss cleaning the kitchen at home (no, not really). We also learned that hiking on the Gamage glacier behind the station is more restricted after a new crevasse opened up. Funny story about that…..Mark Moline found it by falling into the crack. He was fine, but it was a bit unnerving. The GSAR (Glacial Search and Rescue) team changed the boundaries after they went and uncovered the full extent of "Mark's Crack".

The bad weather lets us do a bit of data analysis on where the penguins are foraging. The penguins seem to be keying off of the deep canyon off of Palmer Station. This has been a working hypothesis from the "birders".

Finally, I’ll leave you with an awesome moon-rise over the Gamage Glacier. Pretty awesome sight.

Moonrise over Gamage Glacier

Penguins, AUV’s, Satellites: together at last

Adélie Penguin Rookery on Humble Island

Satellite tagged Adélie Penguin

Penguin swimming tracks near Palmer Station

Ballasting the Glider (Blue Hen)

Is it possible to follow penguins from space to understand where and how they are feeding in Antarctica? Absolutely!..but not without an excellent team from the University of Delaware, Rutgers University, the Polar Oceans Research Group, and Cal Poly San Luis Obispo.

The sequence starts with the "Birders". The "Birders" are from the Polar Oceans Research Group, and they have been studying penguins in the West Antarctic Peninsula for years. Headed by Bill and Donna Fraser, they head out to local rookeries to identify good penguins to tag with satellite transmitters. Finding the right breeding pair is key. The pair should have two chicks with both parents still around. Some chicks only have one parent, probably because one parent was killed by a Leopard Seal. We want to choose one of the parents, because we are pretty certain they will return to their chicks to feed them. This also helps in recovering the transmitter. If the bird does not return, the transmitter comes off during their natural annual molt cycle. Once a penguin is selected, it is gently fitted with a satellite transmitter. Special waterproof tape is used to attach the transmitter to the thick feathers on the back of the penguin. The penguins are remarkably calm during the process. Once the tag is attached, the penguin is released back to its nest.

The next part of the sequence is for the birds. The penguins head out to feed on krill and small fish in the area. Their tags relay their position information to ARGOS satellites and we get nightly updates. The Birders pass their data on to me nightly, and I filter and map the penguin tracks. I put them into Google Earth, so we can see where the penguins have been feeding. Then, through the magic of mathematics, we turn their tracks into predicted penguin densities. Based on these densities, we plan our AUV missions (Slocum Electric Gliders and REMUS AUV's) to intersect with the feeding penguins.

The first priority is to make sure the AUV's are ballasted correctly. This means that they need to be trimmed with weights just right so they travel correctly under the water. We use small balances and scales to get the weight of the vehicle just right, then put them into ballasting tanks to make sure we did it correctly. The vehicles should hold steady just under the surface of the water.
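
For the curious, the track-to-density step can be sketched in a few lines. This is not our actual pipeline; the grid size, smoothing bandwidth, and sample coordinates below are all invented for illustration. The idea is simply to bin satellite fixes into a grid and smooth the counts into a relative density surface:

```python
import math
import random

def track_density(lons, lats, grid=30, sigma=1.5):
    """Bin position fixes into a grid, then smooth the counts with a
    Gaussian kernel to get a relative foraging-density surface."""
    lo_x, hi_x = min(lons), max(lons)
    lo_y, hi_y = min(lats), max(lats)
    counts = [[0.0] * grid for _ in range(grid)]
    for x, y in zip(lons, lats):
        i = min(int((x - lo_x) / (hi_x - lo_x) * grid), grid - 1)
        j = min(int((y - lo_y) / (hi_y - lo_y) * grid), grid - 1)
        counts[i][j] += 1.0
    # Brute-force Gaussian smoothing (fine for a sketch).
    k = int(3 * sigma)
    kernel = [[math.exp(-(dx*dx + dy*dy) / (2 * sigma * sigma))
               for dy in range(-k, k + 1)] for dx in range(-k, k + 1)]
    norm = sum(map(sum, kernel))
    density = [[0.0] * grid for _ in range(grid)]
    for i in range(grid):
        for j in range(grid):
            acc = 0.0
            for dx in range(-k, k + 1):
                for dy in range(-k, k + 1):
                    ii, jj = i + dx, j + dy
                    if 0 <= ii < grid and 0 <= jj < grid:
                        acc += counts[ii][jj] * kernel[dx + k][dy + k]
            density[i][j] = acc / norm
    total = sum(map(sum, density))
    return [[c / total for c in row] for row in density]

# Fake satellite fixes clustered around a made-up foraging spot.
random.seed(42)
lons = [-64.2 + random.gauss(0, 0.05) for _ in range(300)]
lats = [-64.8 + random.gauss(0, 0.05) for _ in range(300)]
density = track_density(lons, lats)
```

The high-density cells of a surface like this are the areas an AUV mission would be planned to cross.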

Getting ready for the launch of the "Blue Hen" (M. Oliver and K. Coleman)

Once we have a planned mission, we head out from the station in small zodiacs to a pre-determined point. For the Gliders, we call mission control at Rutgers University (Dave, Chip, John) and let them know a glider will be in the water shortly. Once it is in, the glider is controlled via satellite telephone: the glider calls in and reports data and position to mission control. We can see the data coming in live over the web and in Google Earth as we navigate the vehicle to where the penguins are feeding. The gliders move by changing their ballast, which allows them to glide up and down in the water while their wings give them forward momentum. They "fly" at about 0.5 mph for weeks at a time!

Mark Moline with REMUS's

In contrast to the Gliders, the REMUS vehicles are very fast and are designed for shorter, one-day missions. Daily missions are planned around the penguin foraging locations. The Cal Poly group (Mark Moline and Ian Robbins) have been launching two REMUS vehicles per day to map areas the gliders can't get to. Like the gliders, these vehicles call back via Iridium to let us know how they are doing on their mission.

MODIS Chlorophyll, Penguins, and Gliders

Glider Dances around Adélie Penguin Tracks in a sea of chlorophyll

Finally, we are getting satellite support from my lab at U.D. Erick, Megan and Danielle have been processing temperature and chlorophyll maps in near-real time to support our sampling efforts, as well as AUV operations up and down the West Antarctic Peninsula. Just today, we saw that the penguins in Avian Island (south by a few hundred miles) have been keying off of a chlorophyll front. RU05 was deployed by the L. M. Gould and will be recovered soon. All in all, it is a pretty awesome mission to track these penguins from space and AUV’s. We will see how the season develops!

Note: I will be uploading photos and videos to the ORB Lab Facebook page throughout my stay in Antarctica. Be sure to check there for my latest updates.

Penguins from Space

The West Antarctic Peninsula (WAP) is one of the most rapidly warming regions on Earth, with a 6°C temperature rise since 1950. Glaciers are retreating and the duration and extent of sea ice have significantly decreased. Many species rely on the sea ice as a resting platform, breeding ground, or protective barrier, or have life histories linked to sea ice thaw and melt cycles. With the declines in sea ice, many species are having a difficult time surviving and adapting to the new warming conditions.

The food web along the WAP is short and allows energy to be transferred efficiently. Phytoplankton (tiny plants that capture energy from the sun) are ingested by zooplankton (such as krill) which are in turn eaten by penguins, seals and whales. Due to the rapid nature of the warming around Palmer Station and the short food chain, it is an ideal location to study the effects of the acute changes in a warming environment.

Palmer Station, Antarctica

In particular, Adélie penguins are experiencing significant population declines near Palmer Station, Antarctica. On Anvers Island, populations have decreased by 70%. Declines in sea ice have also led to declines in the preferred foods of Adélies: silverfish have nearly disappeared and krill have decreased by 80%. Currently, Adélies are having a difficult time finding a satisfying meal, and many are migrating southward to look for new places to live and better food resources. Meanwhile, ice-avoiding species (Gentoo and Chinstrap penguins) have been able to move south into the Adélies' home range.

Adélies are a prime vertebrate species to study in relation to a changing environment. Tagging Adélies in summer breeding colonies with satellite-linked transmitters allows their foraging locations to be monitored. Their foraging tracks can be compared to satellite-derived oceanic properties such as sea surface temperature, chlorophyll, sea ice, and wind. Since conditions have changed so quickly over the last few decades, modern satellites can easily detect these changes. The UD-134 Slocum Glider (underwater robot) will be deployed in January 2011 and 2012 to do additional surveys near breeding hotspots. This will allow us to combine satellite data with high-resolution in-situ glider data to predict how ideal foraging locations for Adélies may change as warming continues. It will also test the satellites' ability to accurately describe ecological changes that are occurring along the WAP.

Adélie Penguin

The Palmer Long Term Ecological Research Program (PAL LTER) began in 1990 and investigates aspects of this polar environment while maintaining historical records for marine species. Historical satellite data and species records will be useful in predicting phytoplankton, krill, and penguin abundances and distributions. Models will be used to predict future foraging locations of Adélies in the PAL LTER region of the WAP. It is important to study this region because changes are happening faster than predicted, and these changes can lead to dramatic effects in our lifetimes.

OSU Ships Underway Data System

One of the highlights of going to the RVTEC meeting is getting to hear about some of the cool projects that are underway at the various institutions. One talk that caught my attention was the SUDS system, an NSF sponsored project that was given by the techs at Oregon State University.

I talked David O'Gorman and Toby Martin into doing a quick rundown of their SUDS system on camera during one of the breaks. SUDS is an acronym for the Ships Underway Data System, which consists of software and two data acquisition boards that they designed in-house: one analog and one digital. Each board can be programmed with metadata about the sensors that are attached to it. When the boards are plugged into the ship's network, they broadcast XML data packets via UDP, containing both the data and metadata about the data, for a back-end data acquisition system to capture and store. For redundancy, I'm told there can be multiple acquisition systems on the network as well.

The data acquisition cards can be powered directly or via POE (Power over Ethernet), and they can also supply power to the sensor if needed. The digital cards can accept RS232 and RS485. The analog card has 4 differential input channels: two can do 0-5V and the other two 0-15V, with input signals ranging from 600Hz to 20kHz.
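
To make the self-describing packet idea concrete, here's a minimal sketch of what a back-end acquisition system might do with one of those datagrams. The element names and values below are invented for illustration; the real packet layout is in the examples on the OSU site:

```python
import xml.etree.ElementTree as ET

def parse_suds_packet(payload):
    """Parse one hypothetical SUDS-style XML datagram into a dict.
    The schema here is an assumption, not the real SUDS format."""
    root = ET.fromstring(payload)
    return {
        "sensor": root.findtext("metadata/sensor"),
        "units": root.findtext("metadata/units"),
        "value": float(root.findtext("data/value")),
    }

# Example of the kind of datagram a board might broadcast:
sample = b"""<packet>
  <metadata><sensor>thermosalinograph</sensor><units>degC</units></metadata>
  <data><value>12.74</value></data>
</packet>"""
record = parse_suds_packet(sample)

# A real acquisition system would receive these over UDP, e.g.:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("", 55555))   # port number is an assumption
#   payload, addr = sock.recvfrom(65535)
#   record = parse_suds_packet(payload)
```

Because the metadata rides along with every packet, any listener on the network can store or display the data without prior knowledge of which sensors are plugged in, which is what makes the broadcast design so flexible.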

Their website has links to a PDF of the presentation they did at the 2010 UNOLS RVTEC meeting, as well as various examples of data packets that the system broadcasts. It's definitely something that could be quite useful for handling the ever-changing data acquisition needs on today's research vessels. I look forward to learning more about the SUDS system in the days to come.


RV HSBC Atlantic Explorer

Just got back from the 2010 UNOLS RVTEC meeting, which was held at the Bermuda Institute of Ocean Science (BIOS) – home of the RV HSBC Atlantic Explorer.

(Acronym Police: UNOLS = University-National Oceanographic Laboratory System and RVTEC = Research Vessel Technical Enhancement Committee).

For those unfamiliar with RVTEC, it is a committee organized around 1992 to “provide a forum for discussion among the technical support groups of the National Oceanographic Fleet” in order to “promote the scientific productivity of research programs that make use of research vessels and oceanographic facilities and to foster activities that enhance technical support for sea-going scientific programs” as listed in Annex V of the UNOLS charter. Membership is extended to UNOLS member institutions but “Participation shall be open to technical and scientific personnel at UNOLS and non-UNOLS organizations”.

The meeting agenda was pretty intense, and we were pretty much straight out from Monday through Friday afternoon. There were a lot of scary-smart people in the room doing some pretty amazing things in support of science operations at their respective institutions. I tried to compile a list of Tech Links on the site to make it easier to find some of the various resources that were discussed at the meeting. I did the same thing at last year's RVTEC meeting in Seattle, but some additions and corrections were needed based on feedback from the members. I'm hoping that I'll be able to obtain funding to attend next year's meeting and perhaps the upcoming Inmartech meeting (look for a post on Inmartech soon).

I shot some video, made some fantastic contacts and had some interesting discussions at this year's RVTEC meeting. If all goes smoothly, I'll have a couple of new blog entries online this week to help share some of the wealth of knowledge.

3DVista Panoramic Tour of the Sharp

I tinkered around with a demo copy of 3DVista Stitcher and 3DVista Show 3.0 to push their capabilities a tad. I touched on the packages in a previous blog post about the Global Visualization Lab, where I did a simple panorama of the room. The wheels started turning, and we decided to push the envelope a little and create a series of panoramic views of the RV Hugh R Sharp as a proof of concept for an online virtual tour of a research vessel.

Panoramic Tour of the RV Hugh R Sharp

Click on this image to visit the proof-of-concept panorama…

The image above is a screen shot of the proof-of-concept panoramic tour we came up with. Click the image above or this hyperlink to visit the actual panoramic tour. The pane on the left shows an interactive panorama of the various points of interest on the ship. The right-hand pane shows a scan of the deck and compartment that the panorama represents. If there is no user action, the tour will cycle through a complete 360-degree view of each panorama and then move on to the next panorama in the list. There are two drop-downs on the right, one above the deck layout and one below it, that let you jump straight to a specific panorama.

A really cool feature of the product is the ability to take the panorama full-screen for a more immersive experience. To do so, just click on the arrow button in the top right-hand corner next to the question mark symbol. Once in full-screen mode, you can easily cycle through the various panos by mousing over them near the bottom of the screen.

The 3DVista Show software also allows you to insert hot-spots into the panoramas that can either link to other pages/sites or include an audio clip in the mix. This makes it quite easy to include additional information about a specific area or feature. I inserted an animated arrow pointing to the Multibeam Operator Station on the Main Deck -> Multibeam Tech Area that links out to the Reson Seabat 8101 Multibeam Echosounder posting.

Multibeam Tech Pano

The mind races with the various uses for this type of technology. It allows mobility-impaired individuals and class groups to tour a space that they'd ordinarily be unable to access. It also allows scientists to "look around" and get a feel for the spaces that they'd be using when they come onboard a vessel. For a future project, I'd like to get support to do some panoramas both inside and outside of the various UNOLS lab vans, which would allow scientists to virtually stand in the lab vans and walk around to see how they're laid out. 3D panoramas of research sites in remote locations like the Arctic and Antarctic also come to mind, as do tours of mineral samples and other collections, with hotspots on the various specimens linking to additional information. The applications of this tech abound.

I talked with the folks at 3DVista, and it looks like they offer a 15% academic discount for the software, so be sure to ask about it if you're going to purchase it. They also list a one-shot 360-degree pano lens and adapters to make shooting the digital pics a little easier. We used a 180-degree fish-eye lens for our pano shots, which means we did 3 shots at each location, 120 degrees apart from one another, and stitched them together with the 3DVista Stitcher program.

Many thanks to Lisa Tossey for taking the photos and getting this project rolling. I posted this as an unpolished proof-of-concept version; I look forward to the ready-for-prime-time panorama that she comes up with for the CEOE site. I also look forward to seeing any cool panoramas that are out there for research projects. Be sure to share your links.

How to Construct a Global Visualization Lab

My apologies for how long it took to get this up. I promised our colleagues at Xiamen University that I'd put up the complete specs for the Global Visualization room (a component of Dr. Matt Oliver's ORB Lab), but the pesky day job kept getting in the way.

Panorama Fish Eye Lens

I originally tried a video walk-through of the GVis Lab, but it ended up being a lot of panning and zooming around, which I didn't really care for. Instead, I got to try out a fancy digital camera with a 180-degree fish-eye lens the other morning, which I used to shoot three shots of the room 120 degrees apart from each other. I used a software package called 3DVista Show to stitch the fish-eye pictures into a panorama image, which I uploaded to a free online hosted tour on their site. Once the image was uploaded, the service provided an iFrame string that I included in the post to embed the panorama project. Be sure to click the full-screen icon (top right-hand arrow next to the question mark) to see the panorama a little better.

As you pan around the room, you’ll see the major components of the lab, which are:

The Dell Precision T7500 workstation was selected because it was one of the few systems capable of handling (2) PCIe x16 graphics cards simultaneously. We started with one graphics card with the expectation that it could handle the video workload, but wanted the option to add another graphics card in SLI mode to boost graphics performance. So far we haven't needed a second video card; everything runs quite smoothly with Windows 7 x64 as the base operating system running Google Earth Professional.

The nVidia graphics card has two DVI outputs. One output is fed into the VWBox 133A video splitter, which spreads the 4300×2100 signal across the (9) monitors in the 3×3 monitor array. The VWBox also allows us to "subtract out" the bezels, eliminating the few lines of video where the bezels are, so there is no stepping in diagonal lines or graphics. The 460UX-2 Samsung monitors are all 1920×1080 (1080p) monitors with an 11mm bezel on all four sides – the smallest-bezel monitor available when we built the wall. For Google Earth and other high-resolution work, the display is fantastic. However, as small as the bezels are, they can cause readability problems for text that happens to line up with them, such as bulleted text on a Powerpoint slide. To eliminate this possibility, a second large-screen monitor was added so that lower-resolution content, Powerpoint presentations and the like can be displayed at a larger size or simply dragged over to it. The second DVI output drives this 60" LCD display at the right-hand side of the room at 1920×1080 resolution. Windows treats the two displays as one large virtual desktop, so content can be easily dragged from the large multi-screen display to the smaller 60" LCD and back.

We wanted the ability to present and control the system from anywhere in the room, so the RF Go Mouse and keyboard were selected. The RF dongle stays connected from up to 100' away from the computer, which covers the entire lab and beyond. We tried other wireless keyboards and mice, but they quickly lost their connection when they were 10-15 feet away. The 3DConnexion 3D Space Navigator makes it easy to manipulate Google Earth, but it is a USB device (no wireless equivalent is available yet). To let us place the Space Navigator anywhere in the room, a USB extender was used so a Cat5 cable could serve as an extension cord for the controller. The same type of extender was used to allow placement of the Orbit cam on the opposite side of the room (next to the 60" LCD display).

The Orbit Cam is intriguing as it has a stepper motor in the base which allows the operator to turn it left and right. The auto-focus zoomable lens is able to be moved up and down as well. This allows the operator to pan and zoom anywhere in the room when we’re connected to another researcher or student via Skype or other teleconferencing software.

There is a photo below of me standing next to the multi-display wall with the CEOE website maximized on it. This shows the uber-high resolution of the display and some of the issues that having it alone (no 2nd display) could cause. The first such monitor that we put in was an 82" Mitsubishi rear-projection LCD display. We ended up returning that display, even though it was larger, because it just wasn't bright enough – it looked extremely dark sitting next to the much brighter Samsung LCD display wall.



Video Wall Scale

Monitor Wall Mounts

Sharp Aquos 60 inch LCD

Logitech Orbit cam

USB to Cat5

3D Space Navigator

RF Keyboard and Mouse

Pyle Amp

VWBox 133A

Dell Precision T7500



I continue to watch the professional display manufacturer sites for bezel-less LCD displays, which would be my only upgrade that I could imagine for the site. If you run across a 46”+ 1080p zero-bezel display, be sure to send me a link.

The Chief Fusion adjustable wall mounts were quite handy for making minor tweaks to the monitors. It seems that no matter how well you measure, you can never get the displays just perfect, so having the ability to micro-adjust them was a big help. To allow us to lag-screw the mounts to the wall pretty much anywhere (whether there is a stud or not), we lined the entire back wall with plywood and then layered the front with drywall for a finished look. Later on, if we decide to expand to two 9-monitor display arrays, it would be easy enough to add another graphics card, 9 more monitors and a second VWBox.

The big secret to turning the project from just a vision to an awe inspiring reality was our most excellent facilities guys and gals. Without their expertise and attention to detail the room could have turned out just ho-hum. They took our ramblings and descriptions of how we’d like things to look and made it come to life. Kudos to them for the room turning out as nice as it did.

Hopefully the information provided here will allow you to build-up your own visualization wall. If you have any questions or comments, please feel free to post them to the site.

Small & Mighty Mini-Top Barebones NetPC

What came in the box

MiniTop Contents

I thought I'd take a minute to share some info on the small and mighty Mini-Top barebones system from Jetway Computer. (Not to be confused with the Small & Mighty Danny Diaz ;?) This unit is basically the guts of a netbook, but without the screen, so I'll call it a NetPC. We are thinking about introducing them into the computing site here at work, and I was pretty impressed by the feature set and tiny size. Keep in mind that there are several models of ITX barebone systems to choose from over at Jetway. We opted to go with the model JBC600C99-52W-BW, which retails for about $270 at NewEgg. The "-BW" at the end means that it ships with a metal bracket (shown in front of the included remote in the pic above) that allows you to mount the unit to the VESA mounts on the back of most LCD monitors.

Minitop size photo

Smaller than my hand

Since the unit is so small (see pic to the right), you can tuck it out of the way quite easily behind a monitor. It also comes with an angled metal bracket that allows you to stand it up on its end, and stick-on rubber feet in case you want to lay it on its side. Note that this is a “barebones” system, which means that it’s up to you to add the memory (up to 4Gigs of RAM), a single interior hard drive (2.5″ SATA) and a monitor to the mix. We added a 60Gig OCZ Agility 2 SSD (solid state drive) and a couple of Gigs of DDR2 800/667 SODIMM memory to the box (both purchased separately). The unit comes with a driver CD that has both Windows and Linux drivers on it, but since the unit doesn’t have an optical drive, you’ll need to copy them to a thumb drive to use them. You’ll also need to figure out how to install an operating system on the unit. In our case, since we were installing Windows 7, we used the Windows 7 USB/DVD Download Tool to take an ISO file version of our Windows 7 install DVD and create a bootable thumb drive with the Win7 install DVD contents on it. Installation was easy peasy.

Hardware specs are pretty impressive given its low cost and small size:

  • Intel Atom Dual-Core 525 CPU
  • nVidia ION2 Graphics Processor
  • DVI-I and HDMI 1.3 video outputs
  • Integrated Gigabit Ethernet & 802.11 b/g/n wifi
  • 12V DC 60W power input so it can be easily run off battery or ship’s power
  • Microphone and Headphone connectors
  • LCD VESA mount (-BW model only)
  • Jetway handheld remote control
  • USB 2.0 ports (5) and eSata connection

As I mentioned, we’re investigating using these as replacements for some of the computing site computers. We installed Windows 7 on the system and, between the dual-core Atom processor and the SSD, I can’t tell any difference in performance between this system and the Core 2 Duo desktops that are already in the site. Other possible uses include as a thin client, a kiosk PC, a set-top box for large wall-mounted LCD displays, and as a small low-power PC aboard ship or inside buoys or other deployed equipment. The unit has both DVI and HDMI outputs, so you can easily drive a small LCD or a huge flat-panel TV as long as they have those inputs (as most do). The nVidia ION-2 graphics system will supposedly drive a full 1080p HD display. I took some pics of the unit’s interior (below) so you can have an idea of how the system is laid out inside and out.

MiniTop Front Interior View

Front Interior View

MiniTop Rear Interior View

Rear Interior View

MiniTop Side Interior View

Side Interior View

These aren’t the only mini-PCs on the market. There are others like the Zotac ZBox and the Dell Zino HD, and I’m sure plenty more. The Jetway is just the model that we’re playing with here at the college. Exciting times ahead as these units ramp up in performance and drop down in size and power draw.

Time Lapse Video on the Cheap

The video above is a time lapse of a day in the life of the UD Wind Turbine in Lewes, Delaware.

We were quite excited when they told us that the UD Wind Turbine project was a go. As the time grew near for construction to start, we wanted to chronicle the construction progress and create a time lapse video. I did some research and looked into various webcams with weatherproof housings and the like, but sticker shock at the multi-thousand dollar price tags for the equipment, as well as the networking and power hassles to connect to it, made me shy away from a complicated rig. I decided that the best way to go was the simple route.

The task really screamed for a lower cost, battery powered, weather-resistant camera that could be set to take a picture every X number of minutes. I finally narrowed the search down to the Wingscapes Birdcam 2.0 outdoor camera. The camera retails for about $200, but I found it on Amazon for just over $150. It has lots of advanced features like motion sensing, light sensing and a built-in flash. The main selling points for me were that it was designed for outside use (the turbine was being installed in spring and it was rainy), it stored its images on an easily accessible Secure Digital card (up to 4Gigs), it had a user-programmable time lapse mode, and it ran on four D-cell batteries for more than four weeks’ worth of endurance.
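Before settling on a camera, it’s worth doing the back-of-the-envelope math on how long a card will last at a given interval. A rough sketch, where the per-image size and daylight shooting window are my assumptions, not Wingscapes specs:

```python
def timelapse_capacity_days(card_gb=4, image_mb=1.5, interval_min=15,
                            shooting_hours_per_day=12):
    """Days until the memory card fills up at a given time-lapse interval."""
    shots_per_day = (shooting_hours_per_day * 60) / interval_min
    total_shots = (card_gb * 1024) / image_mb
    return total_shots / shots_per_day

# 4 GB card, ~1.5 MB per JPEG (assumed), one shot every 15 minutes
days = timelapse_capacity_days()
```

At those assumed settings the 4Gig card outlasts the four-week battery endurance, so the batteries, not storage, set the service interval.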

As you can see from the time lapse video that MPEO created of the construction at the turbine base, the results were just what we were looking for (except for the big pile of dirt they put in front of the camera ;?). The video from afar was created using images FTP’d from a webcam located over at the Marine Operations Building. I’ll cover the configuration and components for that webcam setup in a later posting.

I can easily imagine many other uses for this kind of device. Time lapse videos of coastal erosion, tide cycles, lab experiment time series, etc. In addition to the features cited above, the camera also has video and USB outputs on the side of the unit, as well as an external power connector at the bottom for lengthier time lapses. All-in-all, highly recommended.

Birdcam Cover Closed

Cover Closed

Birdcam with the cover open

Cover Open

Birdcam Side View

Side View

I used iMovie to create the movie at the top from all of the stills for this post, but I also just as easily created one using the freely downloadable Windows Live Movie Maker if you’re running Windows.

Video Tour of the Research Vessel Hugh R Sharp

RV Hugh R Sharp ready for launch

We recently had guests come down to take a tour of the Lewes campus and the Research Vessel Hugh R Sharp. One of the guests was wheelchair-bound and was limited to seeing only the main deck of the ship, as getting to the rest of the ship would have required going up and down stairs. The Sharp has accommodations for handicapped scientists, but they are pretty much limited to the main deck. This limits their access to just the aft working deck, the wet and dry labs, the galley and the conference room. The wheels started turning during that tour on how to share the rest of the technological awesomeness of the Sharp with others. I decided to take my trusty $100 video camera in hand and record a video tour of the ship for those that are unable to navigate the stairs, and for classrooms and visitors who just can’t make the trek to Lewes for a tour. It’s a tad long, running just over 40 minutes, but it covers almost the entire ship. Enjoy!

Many thanks to Captain Jimmy Warrington for taking time to do a whirlwind tour just prior to a science mission – as you can tell from the video, he’s a natural at relaying information about the RV Hugh R Sharp and its science capabilities.

Detailed drawings showing deck layouts and profiles of the Sharp can be found on the RV Hugh R Sharp landing page, which includes PDFs of:

To help you orient yourself a little bit as to the spaces that were covered, here are some deck diagrams to show the overview of a few of the spaces.


Aft Deck


Dry Lab


Wet Lab

Polar Orbiting Satellite Receiving Station

The video above is a quick screencast of NASA JPL’s Eyes on the Earth application, which shows the tracks of various satellites orbiting the globe. It’s a really cool application that gives a top-notch overview of some of the satellites currently in orbit and their trajectories around the Earth. Take some time and poke around, you’ll be glad you did.

Polar Satellite Radome

The reason I included it is that I promised to cover the polar orbiting satellite receiving station in a previous blog post about the new Satellite Receiving Station in Delaware. In the previous post I discussed the geostationary satellite receiving station. In this post, I hope to shed some light on the polar orbiting receiving setup.

What’s Inside the Radome

MODIS Satellite Pass

The equipment for the polar orbiting satellite receiving station is a bit more involved than the essentially non-moving geostationary setup. As the name implies, the polar orbiting satellites do just that: they orbit the Earth north and south, going from pole to pole. Their path is relatively simple, they just go around the earth in circles, but as they’re doing so, the Earth is rotating beneath them. The satellites point their cameras towards the earth and essentially capture a swath of data during each orbit. Since the Earth is rotating beneath them, the swath appears as a diagonal path if you look at the overlay.
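The diagonal offset is easy to estimate: the Earth rotates 360 degrees per day, or 0.25 degrees per minute, so each orbit shifts the ground track westward by roughly the orbital period times that rate. A simplified calculation (it ignores second-order effects like orbital precession):

```python
def swath_shift_deg(orbit_period_min):
    """Approximate westward shift of the ground track per orbit,
    due solely to the Earth rotating beneath the satellite."""
    earth_rotation_deg_per_min = 360.0 / (24 * 60)  # 0.25 deg/min
    return orbit_period_min * earth_rotation_deg_per_min

# ~99 minutes is a typical period for a sun-synchronous polar orbiter
shift = swath_shift_deg(99)  # roughly 25 degrees of longitude per orbit
```

That ~25-degree stride per orbit is why successive MODIS swaths march diagonally across the overlay.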

Inside the Radome

In order to capture data from a moving target, the dish has to be able to rotate and move in three axes in order to follow the satellite of interest. In order to protect the receiving equipment from the weather, it is typically installed in a rounded fiberglass enclosure called a “radome”. To keep the design relatively simple, only one mounting configuration and radome setup is produced, and it’s designed to mount onboard a ship. It is then relatively simple to attach a mounting bracket to the top of a building and bolt the radome assembly to it.

The video at the top of the page shows that there are several satellites in orbit, so the Terascan software has to pull down satellite ephemeris data from Celestrak each day, take into account the location of the tracking station, and generate a calculated schedule of which satellites will be visible to the satellite dish throughout the day. As there may be more than one satellite in view during any given time period, the satellite operator assigns a priority weighting to each satellite. The Terascan software then uses that weighting to decide which satellite to aim the dish at and start capturing data from.
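I don’t have access to Terascan’s actual scheduling code, but the priority-weighting idea can be sketched in a few lines: when two predicted passes overlap, the higher-weighted satellite wins the dish. A toy illustration (satellite names are real, the times and weights are made up):

```python
from dataclasses import dataclass

@dataclass
class Pass:
    satellite: str
    start: float   # minutes from midnight
    end: float
    priority: int  # higher number wins when passes overlap

def schedule(passes):
    """Greedy sketch: walk the predicted passes from highest priority
    down, keeping each one only if it doesn't overlap a kept pass."""
    chosen = []
    for p in sorted(passes, key=lambda p: -p.priority):
        if all(p.end <= c.start or p.start >= c.end for c in chosen):
            chosen.append(p)
    return sorted(chosen, key=lambda p: p.start)

plan = schedule([
    Pass("NOAA 18", start=60, end=75, priority=1),
    Pass("Aqua",    start=70, end=85, priority=3),   # overlaps NOAA 18; Aqua wins
    Pass("Terra",   start=120, end=135, priority=2),
])
```

Here the NOAA 18 pass is dropped because Aqua, with the higher weighting, owns the dish during the overlap.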

Receiving Station Workstations

Acquisition and Processing Systems

Inside the building is a rack of computers and receivers whose purpose in life is to control the dish on the roof of the building and to receive and process the data relayed down from the satellites. The receiving station at UD has both X and L-band receivers, which receive the data stream and pass it to a SeaSpace Satellite Acquisition Processor. The processor then sends the data packets to a Rapid MODIS Processing System (RaMPS), which combines the granularized HDF data files from the satellites into a TeraScan Data File (TDF). Once in this format, various programs and algorithms can be run against the TDF file, and channels of interest can be combined using NASA/NOAA and other user-supplied algorithms to create the output product of interest. As the files can get rather large and several can come in throughout the day, they are then moved over to a Network Attached Storage (NAS) server and stored until they are needed.
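That final sweep-to-storage step is the kind of thing a small housekeeping script handles. A hypothetical sketch, not SeaSpace code; the directory layout and the .tdf suffix convention are my assumptions:

```python
import shutil
from pathlib import Path

def sweep_to_nas(incoming: Path, nas: Path, suffix: str = ".tdf"):
    """Move finished TeraScan Data Files off the processing box onto
    bulk NAS storage, returning the file names that were moved."""
    nas.mkdir(parents=True, exist_ok=True)
    moved = []
    for f in sorted(incoming.glob(f"*{suffix}")):
        shutil.move(str(f), str(nas / f.name))
        moved.append(f.name)
    return moved
```

In practice you’d run something like this from a scheduled task once the processing chain marks a file complete.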

Satellites Licensed

The UD receiving station is licensed and configured to receive data from the following satellites:

  • Aqua
  • Terra
  • NOAA 15
  • NOAA 17
  • NOAA 18
  • NOAA 19
  • MetOp-A (Europe)
  • FY-1D (China)

Hopefully this sheds a little more light on the polar orbiting receiving station and its capabilities. Let me know if there are any additions or corrections to the information I’ve posted.

Caley Ocean Systems CTD Handling System


One of the interesting innovations on the RV Hugh R Sharp is the incorporation of a “CTD Handling System” from Caley Ocean Systems. The video above was taken from the wet lab of a CTD Rosette being deployed and recovered using this system. If you search around on YouTube, you can find some interesting videos of crews deploying and recovering the CTD Rosette system. What you typically find is that you have one crane operator and then two or three crew members on deck with poles and/or ropes to try and guide the CTD back onto the deck. With the ship rocking and rolling out to sea, this can be a tad dangerous, especially when much of this work is done close to the waterline with waves splashing on deck.

The RV Hugh R Sharp has a CTD handling system that is designed to be operated by a single marine technician. It is one of only two such systems currently in use in the UNOLS fleet (the other is on the RV Kilo Moana).

The marine technician on the Sharp is up on the bridge level and looks down through windows at the wet lab area and alongside the ship. This allows them to control the deployment and recovery of the CTD from a much safer location. The Caley CTD Handling System has motion compensation built in to cancel out the roll and pitch of the ship and is designed to mostly eliminate the swaying of the CTD system. This makes for a much smoother and safer CTD deployment and recovery, which can occur quite often on many research vessels. The following pictures show the control station up on the bridge and an exterior view of the Caley CTD Handling System onboard the Sharp.

Caley Ocean Systems CTD Handling System - RV Hugh R Sharp

CTD Handling System Control Station - RV Hugh R Sharp

View From The Control Station

Next time I’m out on the Sharp, I’ll try to get a view of the system in action from outside the wet lab.

Is the system perfect? No, they still have some kinks to work out and with Caley located over in the UK, turn-around time can be pretty slow at times. The vessel operators are taking some lumps and trying to iron the kinks out of a system that can help make it a little safer to do routine underway CTD casts. Their efforts should be applauded.

Celebrate 10/10/10 Day

This weekend there will be a mystical confluence of mathematical synchronicity that hasn’t been seen in 100 years – that’s 10² years! On Sunday, Oct 10th the date will be 10/10/10. Geeks like me live for such interesting number sequences and I thought it would be neat to take pause and consider the mysterious power of 10’s. To kick things off, the fine folks at Eames Office have posted their classic “Powers of Ten” video online, which stretches one’s brain as we try to fathom the seemingly simplistic task of scaling our reality up or down by a factor of 10…

It boggles my mind whenever I watch this video. We throw around powers of ten all the time when we describe the world around us, like the volume of the ocean being 1.332×10²¹ liters or the size of picoplankton, which is on the order of 0.2–2×10⁻⁶ meters (0.2–2 micrometers). On the electronics side of the house, each jump forward in computing power and efficiency usually coincides with another jump down in trace-width size for integrated circuits and microprocessors – with some of the latest microprocessors being fabricated using 45 nanometer (45×10⁻⁹ meter) manufacturing processes.
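Scientific notation lets the exponent be anything; engineering notation snaps the exponent to a multiple of 3 so the mantissa lines up with SI prefixes like nano and micro. A quick sketch of the conversion:

```python
import math

def engineering_notation(x):
    """Format x with an exponent that is a multiple of 3 (e.g. 200e-9)."""
    exp = int(math.floor(math.log10(abs(x))))
    exp3 = exp - (exp % 3)          # snap the exponent down to a multiple of 3
    mantissa = x / 10**exp3
    return f"{mantissa:g}e{exp3:+d}"

ocean = engineering_notation(1.332e21)  # ocean volume in liters
pico = engineering_notation(0.2e-6)     # small end of picoplankton, in meters
```

So 0.2×10⁻⁶ meters renders as 200e-9, i.e. 200 nanometers, which is exactly how you’d say it out loud.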

In recognition of 10/10/10 – the “Powers of Ten” website will be relaunched on Oct 10, 2010. They have a Google Maps link that shows the various events going on worldwide to celebrate this every-100-year event.

Some local events in the US include:

We deal with numbers either written in scientific notation or in engineering notation all the time – take time to share the video with your friends and kids. It will definitely fire up some brain activity trying to fathom the vastness of the universe and infinitesimal tininess of the atomic realm all in one short video clip.

For extra 10 goodness:

Happy 10/10/10 Day!

New Polar and Geosynchronous Satellite Receivers for Delaware

A few weeks ago they fired up a new satellite receiving station from SeaSpace at the University of Delaware’s main campus in Newark, DE. Two receivers were brought online, one for L-band reception from geosynchronous satellites and one for X/L-band reception from polar orbiting satellites. Both receiving systems have dishes that are mounted on the roof of Willard Hall, as it presented the least obstructed view of the sky. This adds additional capability to an east coast satellite operations contingent which includes:

  • University of Maine
  • City College of New York
  • Rutgers University
  • University of Delaware
  • University of South Florida
  • Louisiana State University
  • Purdue University

For this blog posting, I’ll only cover the geosynchronous satellite capabilities. In a future posting I’ll cover the polar orbiting hardware and its capabilities.

Geosynchronous Satellite

UD Geosynchronous Satellite Dish

The beauty of geosynchronous satellites is the simplicity with which they can be tracked. Rather than flitting all about and requiring fancy calculations and equipment to track them, you merely aim the dish at the point in the sky where the satellite remains fixed relative to the motion of the earth and pretty much lock the receiving dish down. Since the satellite is moving with a trajectory and speed that matches the rotation of the earth, the satellite is said to be “geostationary”.

The dish used to receive the signals from the geosynchronous satellites is therefore simple in its design. It is mounted with only one axis of movement, meaning it can only be adjusted along an arc of the sky either to the east or to the west. There is a motor and lead screw mounted on the back that will either push the dish one way or pull it the other in order to position it for the best signal strength. The UD dish is currently dedicated to constantly receiving real-time data from the GOES-EAST satellite (also known as “GOES-13”). GOES East outputs full disk imagery of the earth from a longitude of 75 degrees west, which gives a good view of pretty much all of North and South America and a good chunk of the Pacific and Atlantic Oceans.

GOES stands for “Geostationary Operational Environmental Satellite” and it is operated by NOAA’s NESDIS or “National Environmental Satellite, Data, and Information Service” primarily to support meteorological operations and research, which includes weather forecasting and storm tracking. The dish is oriented in such a way that it could also be programmed to point to GOES-WEST (aka GOES-11) for a satellite view of the Pacific Ocean (centered around 135 degrees west longitude) if the need arises.
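For the curious, the pointing math for a fixed dish is straightforward spherical trigonometry. A simplified sketch (it assumes a spherical Earth and a station at sea level; a real install would use surveyed values):

```python
import math

def look_angles(site_lat_deg, site_lon_deg, sat_lon_deg):
    """Approximate azimuth/elevation (degrees) from a ground station to a
    geostationary satellite. Spherical Earth of radius 6378 km, orbit
    radius 42164 km; west longitudes are negative."""
    lat = math.radians(site_lat_deg)
    dlon = math.radians(sat_lon_deg - site_lon_deg)
    cos_g = math.cos(lat) * math.cos(dlon)     # cosine of the central angle
    sin_g = math.sqrt(1.0 - cos_g**2)
    k = 6378.0 / 42164.0                       # Earth radius / orbit radius
    elevation = math.degrees(math.atan2(cos_g - k, sin_g))
    # great-circle bearing from the site to the sub-satellite point (lat 0)
    azimuth = math.degrees(math.atan2(math.sin(dlon),
                                      -math.sin(lat) * math.cos(dlon))) % 360
    return azimuth, elevation

# Newark, DE (~39.7 N, 75.75 W) looking at GOES-East parked at 75 W
az, el = look_angles(39.7, -75.75, -75.0)
```

For the Willard Hall dish this works out to roughly due south at around 44 degrees of elevation, which is why a one-axis east–west adjustment is all the mount needs.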

GOES East Full Disk Infrared GOES West Full Disk Infrared

GOES Sensors

One thing to bear in mind is that GOES-13 hasn’t always been “GOES East” – it took over for GOES-12 in April 2010, with GOES-12 moving to 60 degrees West to replace GOES-10 (decommissioned) for coverage of South America. I note this so that you don’t assume that the sensors (and/or their calibration factors) for a particular GOES station are always the same.


The current GOES-East has an optical imager with six channels: 1.1km resolution for the visible channel (one), and 4km and 8km resolutions for the near infrared, water vapor and thermal infrared channels (two through six). The imager is basically a rotating mirror and lens configuration that scans the earth from north to south, line by line, to receive reflected visible light as well as water vapor and infrared radiation channels. Each line scanned is digitized and transmitted back toward the earth, with measurement units of percent albedo for visible light and temperature for the water vapor and infrared information. Spectral response functions, as well as other GOES calibration information, can now be downloaded online from the NOAA Office of Satellite Operations.


GOES satellites are also equipped with a sounder with 8km resolution. The sounder scans the atmosphere over the land and ocean and provides vertical profiles which include the temperature of the surface and cloud tops as well as derived wind velocities from these measurements.

Real-time Access to Data

The key feature of having a satellite receiving station on-site is access to the raw, real-time satellite data. Sure, you can pull some images down from the NOAA Geostationary Satellite Server, but those are just derived images. Scientists here at UD and elsewhere are interested in getting the latest raw data feeds from the satellites so that they can research and develop algorithms that process the raw channel data into other products in support of their research projects.

Next on my agenda is to try to give some insight into the polar orbiting satellite tracking station and the fancy gear that sits inside the radome enclosure. Cheers!

My IT is Greener than Your IT (or Server Virtualization FTW)

Carbon Carbon Everywhere

Carbon footprint, carbon emissions, carbon taxes…carbon carbon carbon. That’s all we’re hearing these days. If we do something that implies that we’re using less carbon then voila! We’re suddenly “Going Green”. As a carbon-based life form, I’m quite fond of carbon personally, but the story today is about how to minimize the amount of carbon that we’re responsible for having spewed into the atmosphere and taken up by the oceans. So the thing you need to do to eliminate your carbon footprint as well as the footprint of your neighbors and their neighbors is install a 2 Megawatt Wind Turbine. Problem solved…you are absolved of your carbon sins and you may go in peace.


What’s that you say? You don’t have a 2MW wind turbine in this year’s budget? Then it’s on to Plan B…well, Plan A in my case, as I started down this road years ago, long before we installed the turbine. Even though the end result is a much greener IT infrastructure, the plan was originally geared towards gaining more system flexibility, efficiency and capabilities in our server infrastructure. I’d be lying if I said I started out doing it to “be green”, even though that was an outcome of the transition. (Unless of course I’m filling out a performance appraisal and it’ll give me some bonus points for saying so – in which case I ABSOLUTELY had that as my primary motivator ;?)

One of the things that we do here in the Ocean Information Center is prototype new information systems. We specialize in creating systems that describe, monitor, catalog and provide pointers to global research projects as well as their data and data products. We research various information technologies and try to build useful systems out of them. In the event that we run into a show stopper with a technology, we sometimes have to switch to another technology that is incompatible with those in use on the server – whether that’s the operating system, the programming language, the framework or the database technology. In these scenarios, it is hugely important to compartmentalize and separate the various systems that you’re using. We can’t have technology decisions for project A causing grief for project B, now can we?

One way to separate the information technologies that you’re using is to install them on different servers. That way you can select a server operating system and affiliated development technologies that play well together and fit all of the requirements of the project as well as its future operators. With a cadre of servers at your disposal, you can experiment to your heart’s content without impacting the other projects that you’re working with. So a great idea is to buy one or more servers that are dedicated to each project…which would be wonderful except servers are EXPENSIVE. The hardware itself is expensive, typically costing thousands of dollars per server. The space that is set aside to house the servers is expensive – buildings and floor space ain’t cheap. The air conditioning needed to keep them from overheating is expensive (my rule of thumb is that if you can stand the temperature of the room, then the computers can too). And lastly, the power to run each server is expensive – both in direct costs to the business for electricity used and in the “carbon costs” that generating said electricity introduces. I was literally run out of my last lab by the heat that was being put out by the various servers. It was always in excess of 90 F in the lab, especially in the winter when there were no air conditioners running. So my only option was to set up shop in a teeny tiny room next to the lab. Something had to give.

We Don’t Need No Stinkin’ Servers (well, maybe a few)

A few years ago I did some research on various server virtualization technologies and, since we were running mostly Windows-based servers at the time, I started using Microsoft’s Virtual Server 2005. Pretty much the only other competitor at the time was VMware’s offerings. I won’t bore you with the sales pitch of “most servers usually only tap 20% or so of the CPU cycles on the system” in all its statistical variations, but the ability to create multiple “virtual machines” or VMs on one physical server came to the rescue. I was able to create many virtual servers on each physical server that I had. Of course, to do this, you had to spend a tad more for extra memory, hard drive capacity and maybe an extra processor; but the overall cost to host multiple servers on one physical box (albeit slightly amped up) was much lower. To run Virtual Server 2005, you needed to run Windows Server 2003 64-bit edition so that you could access more than 4Gigs of RAM. You wanted a base amount of memory for the physical server’s operating system to use, plus some extra RAM to divvy up amongst however many virtual servers you had running on the box. Virtual Server was kind of cool in that you could run multiple virtual servers, each in their own Internet Explorer window. While that worked okay, a cool tool came on the scene that helped you manage multiple Virtual Server 2005 machines with an easier administrative interface: the “Virtual Machine Remote Control Client Plus”. Virtual Server 2005 served our needs just fine, but eventually a new Windows Server product line hit the streets, and Windows Server 2008 was released to manufacturing (RTM) and began shipping on new servers.

Enter Hyper-V

A few months after Windows Server 2008 came out, a new server virtualization technology was introduced called “Hyper-V”. I say a “few months after” because only a Beta version of Hyper-V was included in the box when Windows Server 2008 rolled off the assembly line. A few months after it RTM’d though, you could download an installer that would plug in the RTM version of it. Hyper-V was a “Role” that you could easily add to a base Win2k8 Server install that allowed you to install virtual machines on the box. We tinkered around with installing the Hyper-V role on top of a “Server Core” (a stripped-down meat and potatoes version of Win2k8 Server) but we kept running into road blocks in what functionality and control was exposed so we opted to install the role under the “Full Install” of Win2k8. You get a minor performance hit doing so, but nothing that I think I notice. A new and improved version came out recently with Windows Server 2008 R2 that added some other bells and whistles to the mix.

The advantages of moving to server virtualization were many. Since I needed fewer physical servers, they included:

  • Less Power Used – fewer physical boxes meant lower power needs
  • Lower Cooling Requirements – fewer boxes generating heat meant lower HVAC load
  • Less Space – Floor space is expensive, fewer servers require fewer racks and thus less space
  • More Flexibility – Virtual Servers are easy to spin up and roll back to previous states via snapshots
  • Better Disaster Recovery – VMs can be easily transported offsite and brought online in case of a disaster
  • Legacy Projects Can Stay Alive – Older servers can be decommissioned and legacy servers moved to virtual servers

Most of these advantages are self-evident. The ones I’d like to touch on a little more are the “flexibility”, “disaster recovery” and “legacy projects” topics, which are very near and dear to my heart.


The first, flexibility, was a much-needed feature. I can’t count how many times we’d be prototyping a new feature and then, when we ran into a show-stopper, would have to reset and restore the server from backup tapes. The sequence would be: back up the server, make your changes and then, if they worked, move on to the next state; if they didn’t, restore from backup tapes. All of this was time-consuming and, if you ran into a problem with the tape (mechanical systems are definitely failure prone), you were up the creek sans paddle. A cool feature of all modern virtualization technologies is the ability to create a “snapshot” of your virtual machine’s hard drives and cause any future changes to be written to a different linked virtual hard disk. In the event that something bad happens to the system, you simply revert back to the pre-snapshot version (there can be many) and you’re back in business. This means that there is much less risk in making changes (as long as you remember to take a snapshot just prior), and the snapshotting process takes seconds versus the minutes to hours that a full backup would take on a non-virtualized system.
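Conceptually, a snapshot is just a differencing disk: new writes land in an overlay layer while everything beneath stays frozen, and reverting simply discards the overlay. A toy model of the idea (not Hyper-V’s actual AVHD on-disk format):

```python
class DifferencingDisk:
    """Toy model of VM snapshots: each snapshot starts a new overlay
    layer, writes land in the newest layer, and reverting just throws
    the newer layers away (nothing is copied back from tape)."""

    def __init__(self):
        self.layers = [{}]                    # layer 0 is the base disk

    def write(self, block, data):
        self.layers[-1][block] = data

    def read(self, block):
        for layer in reversed(self.layers):   # newest layer wins
            if block in layer:
                return layer[block]
        return None

    def snapshot(self):
        self.layers.append({})                # freeze everything below
        return len(self.layers) - 1

    def revert(self, snap_id):
        self.layers = self.layers[:snap_id] + [{}]

disk = DifferencingDisk()
disk.write("boot", "known-good config")
snap = disk.snapshot()
disk.write("boot", "risky experiment")        # lands in the overlay only
disk.revert(snap)                             # seconds, not hours
```

The discard-the-overlay trick is also why snapshotting is near-instant: no data is copied at snapshot or revert time.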

Another cool feature of snapshots is that they can be leveraged on research vessels. The thought is that you get a virtual machine just the way you want it (whether it’s a server or a workstation). Before you head out on a cruise you take a snapshot of the virtualized machine and let the crew and science parties have their way with it while they’re out. When the ship returns, you pull the data off the virtualized machines and then revert them to their pre-cruise snapshots and you’ve flushed away all of the tweaks that were made on the cruise (as well as any potential malware that was brought onboard) and you’re ready for your next cruise.

Another capability that I’m not yet able to avail myself of is the use of Hyper-V in failover and clustering scenarios. This is pretty much the ability to have multiple Hyper-V servers in a “cluster” where multiple servers are managed as one unit. Using Live Migration, the administrator (or even the system itself, based on preset criteria) can “move” virtual machines from Hyper-V server to Hyper-V server. This would be awesome for those times when you want to bring down a physical server for maintenance or upgrades but you don’t want to have to shut down the virtual servers that it hosts. Using clustering, the virtual servers on a particular box can be shuttled over to other servers, which eliminates the impact of taking down a particular box. One of the requirements to do this is a back-end SAN (storage area network) that hosts all of the virtual hard drive files, which is way beyond my current budget. (Note: If you’d like to donate money to buy me one, I’m all for it ;?)

I also use virtualization technologies on the workstation side. Microsoft has its Virtual PC software that you can use to virtualize, say, an XP workstation OS on your desktop or laptop for testing and development. Or maybe you want to test your app against a 32-bit OS but your desktop or laptop is running a 64-bit OS? No worries, virtualization to the rescue. The main problem with Virtual PC is that it’s pretty much Windows-only and it doesn’t support 64-bit guest operating systems, so trying to virtualize a Windows 2008 R2 instance to kick the tires on it is a non-starter. Enter Sun’s…errr…Oracle’s VirtualBox to the rescue. It not only supports 32 and 64-bit guests, but it also supports Windows XP, Vista and 7 as well as multiple incarnations of Linux, DOS and even Mac OS X (server only).

What does “support” mean? Usually it means that the host machine has special drivers that can be installed on the client computer to get the best performance under the virtualization platform of choice. These “Guest Additions” usually improve performance but they also handle things like seamless mouse and graphics integration between the host operating system and the guest virtual machine screens. Guest operating systems that are not “supported” typically end up using virtualized legacy hardware, which tends to slow down their performance. So if you want to kick the tires on a particular operating system but don’t want to pave your laptop or desktop to do so, virtualization is the way to go in many cases.

The use cases are endless, so I’ll stop there and let you think of other useful scenarios for this feature.

Disaster Recovery

Disasters are not restricted to natural catastrophes. A disaster is certainly a fire, earthquake, tornado, hurricane, etc., but it can also be as simple as a power spike that fries your physical server or a multi-hard-drive failure that takes the server down. In the bad old days (pre-VM), if your server fried, you hoped that you could find the same hardware as was installed on the original system so that you could just restore from a backup tape and not be hassled by new hardware and its respective drivers. If you were unlucky enough not to get an exact hardware match, you could end up spending many hours or days performing surgery on the hardware drivers and registry to get things back in working order. The cool thing about virtualized hardware is that the virtual network cards, video cards, device drivers, etc. that are presented to the virtual machines running on the box are pretty much the same across the board. This means that if one of my servers goes belly up, or if I want to move my virtual machine over to another computer for any reason, there will be few if any tweaks necessary to get the VM up and running on the new physical box.

Another bonus of this out-of-the-box virtual hardware compatibility is that I can export my virtual machine and its settings to a folder, zip it up and ship it pretty much anywhere to get it back up and online. I use this feature as part of my disaster recovery plan. On a routine basis (monthly at least) I shut down the virtual machine, export the virtual machine settings and its virtual hard drives, and then zip them up and send them offsite. This way, if disaster does strike, I have an offsite backup that I can bring online pretty quickly. This also means that I can prototype a virtual server for a given research project and, when my work is complete, hand off the exported VM to the host institution's IT department to spin up under their virtualized infrastructure.

Legacy Projects

I list this as a feature, but others may see this as a curse. There are always those pesky “Projects That Won’t Die”! You or somebody else set them up years ago and they are still deemed valuable and worthy of continuation. Either that or nobody wants to make the call to kill the old server – it could be part of a complex mechanism that’s computing the answer to life, the universe and everything. Shutting it down could cause unknown repercussions in the space-time continuum. The problem is that many hardware warranties only run about 3 years or so. With Moore’s Law in place, even if the physical servers themselves won’t die – they’re probably running at a crawl compared to all of their more recent counterparts. More importantly, the funding for those projects ran out YEARS ago and there just isn’t any money available to purchase new hardware or even parts to keep it going. My experience has been that those old projects, invaluable as they are, require very little CPU power or memory. Moving them over to a virtual server environment will allow you to recycle the old hardware, save power, and help reduce the support time that was needed for “old faithful”.

An easy (and free) way to wiggle away from the physical and into the virtual is via the SysInternals Disk2VHD program. Run it on the old box and in most cases it will crank out files and virtual hard disks (VHDs) that you can mount in your virtual server infrastructure relatively painlessly. I’m about to do this on my last two legacy boxes – wish me luck!


Most of my experience has been with Microsoft’s Hyper-V virtualization technology. A good starter list of virtualization solutions to consider is:

Hopefully my rambling hasn’t put you to sleep. This technology has huge potential to help save time and resources, which is why I got started with it originally. Take some time, research the offerings and make something cool with it!

CTD and Dissolved Oxygen Measurement via Winkler Titration

Last fall I was on the RV Hugh R. Sharp for a short research cruise out in the Delaware Bay. We were sharing the Sharp with chief scientist Dr. George Luther, who was doing a mooring deployment that included a dissolved oxygen sensor (among several other sensors). As part of a calibration check to make sure the readings were correct while we were on station, Dr. Luther did several CTD casts to take water samples at various depths. I snagged the trusty video camera and got him to explain what he was doing and why.

To verify the accuracy of modern electronic oxygen sensors, oceanographers still measure the dissolved oxygen concentration using what’s called the Winkler test for dissolved oxygen. Dr. Luther showed the process of fixing oxygen into a MnOOH solid, which is then measured by the Winkler titration. This allows scientists to compare the oxygen readings they’re getting now with historical records of oxygen levels going back to the late 1800s (an important thing to do when you’re trying to determine long-term trends by comparing historical records against more recent observations), and it serves as an independent check on the readings from the modern electronic sensors.
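For readers curious about the arithmetic at the end of the titration, the standard relation converts the volume of thiosulfate titrant used into a dissolved oxygen concentration. Here’s a minimal Python sketch (the function name and example numbers are mine for illustration, not Dr. Luther’s):

```python
def winkler_do_mg_per_l(v_thio_ml, normality, v_sample_ml):
    """Dissolved oxygen (mg/L) from a Winkler titration.

    v_thio_ml   -- mL of thiosulfate titrant used to reach the endpoint
    normality   -- normality of the thiosulfate solution (eq/L)
    v_sample_ml -- mL of sample titrated
    """
    # 8000 = equivalent weight of O2 (8 g/eq) x 1000 mg/g
    return v_thio_ml * normality * 8000.0 / v_sample_ml

# e.g. 4.0 mL of 0.025 N thiosulfate on a 100 mL sample:
print(winkler_do_mg_per_l(4.0, 0.025, 100.0))  # -> 8.0 mg/L
```

A surface seawater sample in equilibrium with the atmosphere typically lands in the 6-9 mg/L range, so this example is in the right ballpark.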

I’ll sneak down to Dr. Luther’s lab soon and video the second part of the process, where they add the additional chemicals to the mix and determine the actual concentration of dissolved oxygen. Thanks again to Dr. Luther for taking time to explain the process.

Clean Energy from the Ocean: The Mid-Atlantic Wind Park

Drew Murphy, Northeast Region President of NRG Energy Inc., presented the August 19, 2010 lecture in the University of Delaware’s Coastal Currents Lecture Series. NRG owns offshore wind energy developer NRG Bluewater Wind. Mr. Murphy’s excellent presentation on the company’s planned “Mid-Atlantic Wind Park” project off the Delaware coast provided guests with a broad perspective on the challenges to as well as the economic, environmental and energy-related benefits from developing an offshore wind park.

His presentation helped answer questions I hear quite often: “How can offshore wind be developed in the US?”, “Why is offshore wind a good source of clean and reliable energy?” and “How are they able to install wind turbines so far out in the water?”.

Before this talk, I had no clue about some of the specialized vessels and equipment used in offshore wind projects. Thanks to Mr. Murphy I now have some insight into how it might be accomplished, and why it would be good for Delaware and for the entire country.

I appreciate NRG’s permission to post this interesting presentation online. You can find out more about the company’s offshore wind and other clean and renewable energy development efforts by visiting

Wicked Cool Slocum Electric Glider 101

Last week we received the UD-134 glider (aka the “Blue Hen”) back from two tours of duty in the Gulf of Mexico, flown in collaboration with IOOS and Rutgers University for the Deepwater Horizon Oil Spill Response project. To prepare for an upcoming Antarctic mission, we needed to get some work done on UD-134 at the source – Teledyne Webb Research in Massachusetts. Since we were only five hours south of Webb at the time, I loaded the Zune HD (with purely educational podcasts of course – in this case Security Now) and it was road-trip time for me and two of the students from the ORB lab.

The students who came along were really excited to learn from the masters while we tore down UD-134 at Rutgers. (For those new to gliders, Rutgers are the undisputed kings of the glider realm – they’ve been flying them since, like, forever.) One of the students was a summer intern who was charged with learning how to pilot the glider over the summer. Because of the last-minute deployment of UD-134 in the Gulf, he had lots of piloting time on a simulator, but not so much hands-on time with real gliders. The other student was a new grad student who would be responsible for ingesting and processing glider data, so she was looking forward to the trip as well. When we decided at the last minute to head up to Webb Research to deliver the components, the intern said he “felt like Willie Wonka with the winning ticket to tour the chocolate factory”. He was definitely not disappointed, as Peter Collins met us at the doors of Webb and gave the students and me the grand tour.

Peter Collins (aka “Texas Pete” for this post) donned his cowboy hard hat and headed to the ballast tank with me and a couple of our students last week to do a quick talk for Ocean Bytes. Pete gave a quick introduction to the Slocum Electric Glider – an Autonomous Underwater Vehicle (AUV), or underwater glider, made by Teledyne Webb Research. Take note of the glider that Pete has in front of him, as it is a tad different from most in that it has two science bays (there is usually only one). This one is being fitted with a Photosynthetically Active Radiation (PAR) sensor and a FIRe sensor (remember Lauren’s video?) from Satlantic. I’ll hand you over to Peter now so he can discuss what a glider is for and how it works…

In addition to the lineup of first generation gliders, we were introduced to the second generation gliders that are just now being manufactured – also called the “G2” gliders.  I’ll try to cover everything that we learned about the G2 systems in a future post.

Thanks again Peter for the awesome hospitality and for taking such great care of us!

Note: Getting lots of inquiries as to where one might obtain “Cowboy Hard Hats” – Peter provided a couple of links to possible suppliers – Link 1 and Link 2.

Turning Satellite Data into Google Earth Maps: It’s Easy!

As new grad students in the ORB (Ocean exploration, Remote Sensing, and Biogeography) lab at the University of Delaware under Dr. Matthew Oliver, my cohort Danielle Haulsee and I were tasked with learning to write code in R. R is a language for statistical computing and graphics. To some of you this may sound basic, but having no prior programming experience, it was a little overwhelming at times. After getting the basics down, we started pulling sea surface temperature and chlorophyll data from NASA’s Goddard Space Flight Center (GSFC) MODIS Aqua satellite. This isn’t just any temperature and chlorophyll data either – it’s real-time and updated every day! From this we were able to create maps in Google Earth, which is a great platform for viewing and interacting with multiple data layers on a global scale. This allows us to easily distribute NASA’s data for ocean planning. These overlays, along with others, also helped in planning Slocum Glider missions in areas surrounding the Gulf oil spill.

In our Google Earth maps, we created 1-, 3-, and 8-day averages that reflect the current conditions in the ocean. Each day our code downloads the latest satellite data from NASA’s website and averages it with the previous days’ data. The 1-day average maps are patchy because the satellites cannot see through clouds, so the 8-day averages make for a more complete and accurate picture. For higher resolution, we created smaller maps of just California, the East Coast and even Antarctica! These locations correspond to areas where we conduct further research. Google Earth was interested in our overlays, so check out the Google Earth Gallery for sea surface temperature and chlorophyll concentrations near you!
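Our actual processing code is in R, but the core compositing idea is simple enough to sketch in a few lines of Python (the function and toy grids below are illustrative, not our production code): cloud-masked pixels are stored as NaN, and an N-day average simply ignores them, so a cell only stays blank if it was cloudy on every one of the N days.

```python
import numpy as np

def composite(daily_grids, n_days):
    """Average the most recent n_days of satellite grids, ignoring
    cloud-masked cells (stored as NaN). A cell that is cloudy on
    every one of the n days stays NaN -- which is why the 1-day
    maps look patchy and the 8-day maps look more complete."""
    stack = np.stack(daily_grids[-n_days:])
    return np.nanmean(stack, axis=0)

# Three toy 2x2 "daily" SST grids (deg C) with NaN cloud gaps:
days = [np.array([[20.0, np.nan], [18.0, 17.0]]),
        np.array([[21.0, 19.0], [np.nan, 17.5]]),
        np.array([[22.0, np.nan], [19.0, np.nan]])]
print(composite(days, 3))  # each cell averages only its cloud-free days
```

The same logic in R would use `mean(x, na.rm = TRUE)` over the stacked daily rasters.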

APEX Floats 101

Some students and I went on a road trip to Rutgers University in New Jersey and then ended up heading up the coast to East Falmouth, Massachusetts to meet with the fine folks at Teledyne Webb Research. During a tour of the facilities, we were introduced to the APEX floats, whose data (through the ARGO program) the students were accessing for various projects in the ORB lab. James Truman, an engineer at Webb, graciously agreed to do a quick 101 overview of the APEX on camera.

Profiling floats like the APEX are able to sink or float by varying their internal volume. A standard equation for Buoyant Force is:

F_buoyant = −ρVg

where ρ = density of the fluid, V = volume of the object (in this case the float) and g = standard gravity (~9.81 N/kg); the negative sign just indicates that the force acts opposite to gravity. By pumping fluid in and out of the interior to adjust the float’s volume, we are able to make the device either more or less buoyant. There’s a really neat cut-away animation on the UCSD Argo site that shows the guts of the units quite well.
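To put some rough numbers on it, here’s a quick Python sketch of the magnitude of that force (the seawater density and bladder volumes are illustrative values I picked, not APEX specs):

```python
RHO_SEAWATER = 1025.0  # kg/m^3, a typical near-surface density
G = 9.81               # m/s^2 (equivalently N/kg)

def buoyant_force(volume_m3, rho=RHO_SEAWATER):
    """Magnitude of the upward buoyant force on a fully submerged body."""
    return rho * volume_m3 * G

# A float displacing 20 L, then pumping ~0.25 L of oil into an
# external bladder to increase its displaced volume:
print(buoyant_force(0.020))            # ~201.1 N
print(buoyant_force(0.020 + 0.00025))  # ~203.6 N -> slightly more buoyant
```

A fraction of a liter of displaced volume is enough to tip the balance between sinking and rising, which is why these floats can run for years on small battery packs.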

Float technology has evolved rather quickly. The original floats served only as a mechanism for tracking deep ocean circulation – also called Lagrangian drifters, or ALACE (Autonomous Lagrangian Circulation Explorer) floats. They would pop up to the surface and transmit back their positions and the temperature at depth. Comparing a drifter’s last known position with its new position gave scientists an idea of how fast and in what direction the deep ocean currents were moving. Later these drifters were equipped with CTD (Conductivity-Temperature-Depth) sensors; they took readings all the way up the water column and transmitted a “profile” back to the mother ship. These were called PALACE, or “Profiling ALACE”, floats (see WHOI’s site on ALACE, PALACE and SOLO floats).

These predecessors bring us to the modern ARGO float fleet, which consists of APEX floats from Webb Research, PROVOR floats from MARTEC and SOLO floats from the Scripps Institution of Oceanography. My understanding is that these floats dive to a depth of around 2000 meters, drift for 10 days, and then rise to the surface, profiling the water column along the way. They communicate their readings via the Iridium or Argos satellite systems and then dive again for another 10 days or so.

NOAA has a site called ARGO KMZ Files that makes it really easy to get started tracking ARGO floats and their data. You just need to install Google Earth first – which can be downloaded at: Below is a screen shot of the ARGO floats in the Atlantic.


Thanks again to James Truman and the awesome people at Webb Research for taking us under their wing and spending a lot of time showing us the ropes. It was an excellent experience that the students are still talking about.

REU Intern on FIRe

It has been a great pleasure to have Lauren Wiesebron on the Lewes campus this summer. Lauren is a summer intern from Johns Hopkins and is here as part of an NSF-funded Research Experiences for Undergraduates (REU) program. For her summer project, Lauren worked in Dr. Matt Oliver’s lab (the ORB Lab) and chose “photosynthetic efficiency” as her research topic. To gather data, she set up shop in a portable scientific lab van on the dock, where she installed a Fluorescence Induction and Relaxation system (FIRe sensor), a Coulter Counter and a Submersible Ultraviolet Nitrate Analyzer (SUNA). Dr. George Luther’s lab was also taking readings from the same lab van, and Lauren included some of their data in her analysis. Over the past few weeks Lauren analyzed the results, and tomorrow she will present a talk called “Conditions for increased photosynthetic efficiency in an estuarine area”. Here is a walk-through of the lab van that I did earlier this week with Lauren.

Excellent work Lauren and we hope to see you back here for Grad school!

Reson Seabat 8101 Multibeam Echosounder

I lucked out not too long ago and happened to be at the right place at the right time (usually it’s the other way around). I ran into Brian Kidd, our resident expert on multibeam echosounder systems (also known as swath systems), and he said he just happened to have the multibeam components apart for servicing. I ran to get my camera and followed Brian around, asking all kinds of insightful questions (of course). Echosounders are a version of sonar (which stands for SOund Navigation And Ranging): a transmitter emits a sound pulse downward into the water, and the amount of time the pulse takes to come back to the ship is measured. A single beam echosounder shoots a beam straight down and uses a single receiver to pick up the pulse that bounced back from the bottom of the body of water; this is used to determine how deep the water is beneath the ship. A multibeam echosounder emits a broad pulse of sound into the water and then uses multiple receivers aimed at various angles to measure the reflected sound. These times are then processed by the computer to generate a “swath” beneath the ship, and at some distance to either side, which shows the height of the sea floor. Moving the ship forward gives a band of height information beneath the ship’s track, and by running parallel, overlapping tracks, an ever-growing patch of sea floor heights can be mapped.
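The basic arithmetic behind both kinds of echosounder is easy to sketch. Here’s an illustrative Python snippet (the 1500 m/s sound speed is a nominal round number; real systems correct for the measured sound speed profile):

```python
import math

SOUND_SPEED = 1500.0  # m/s, a nominal round number for seawater

def depth_from_echo(two_way_time_s, c=SOUND_SPEED):
    """Single-beam depth: the pulse travels down and back,
    so halve the round-trip time."""
    return c * two_way_time_s / 2.0

def beam_footprint(two_way_time_s, angle_deg, c=SOUND_SPEED):
    """For an angled multibeam receive beam, resolve the slant
    range into (depth, across-track distance)."""
    r = c * two_way_time_s / 2.0
    a = math.radians(angle_deg)
    return r * math.cos(a), r * math.sin(a)

print(depth_from_echo(0.04))       # 0.04 s round trip -> ~30 m
print(beam_footprint(0.04, 45.0))  # the same range on a 45-degree beam
```

Sweeping `angle_deg` across all of the receive beams for one ping gives the across-track line of soundings that builds up the swath.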

Okay, I’ve exhausted my general knowledge of the subject. I’ll let Brian take the reins while I kick back and learn from the master…


At the 2009 RVTEC meeting, I sat in on the Swath/Multibeam workshop and updated the Swath/Multibeam section of OCEANIC’s International Research Vessels database for the UNOLS vessels. There were some huge swath transducer arrays being discussed at the workshop on some of the deep water vessels, so I was pretty surprised to see just how compact the shallow water multibeam systems can be. In the second part of the video, Brian shows us what the Reson Seabat 8101 transducer assembly looks like and how they mount the unit to the ship.


Many thanks to Brian for putting up with me and for taking time to share his knowledge of the Reson Seabat 8101 Multibeam System (PDF of specs here) onboard the RV Hugh R. Sharp.

Why two videos and not one? Apparently YouTube has a 10-minute maximum length for uploaded videos, so I broke the video into two parts. Part 1 covers the monitoring and display station and Part 2 covers the mounting infrastructure and the transducer assembly. This works well for me, as I doubt many people could sit through a 20+ minute video anyway, so breaking it up into two more digestible chunks is better in my opinion.

Portable “Castaway CTD” by YSI

How many times have you been standing on a dock or a bridge or even out on a kayak or large research vessel and found yourself wondering what the temperature, sound and salinity profile was for the water beneath you?  Well, you need wonder no more!

Here’s the last of the videos from the BEST Workshop last week. I talked with Chris from YSI about their new product, the portable “Castaway CTD”. Just a tad larger than your standard handheld GPS, the Castaway CTD is a battery-operated unit that lets you do on-the-spot CTD casts at depths up to 100 meters. Chris did a quick rundown of the unit and its operation, and then we stepped inside to see the software they supply to pull the data off the unit, manipulate it and export it for use. Again the venue was quite noisy, so my apologies for the poor sound quality.

Specs for the unit are available on the YSI site – just click on the “Specifications” tab. The unit runs on two AA batteries, which they claim will power it for over 40 hours. Communications with the unit are via an internal Bluetooth radio, and the unit ships with a tiny USB Bluetooth dongle for your computer. The recorder comes with 15MB of storage, which they claim will hold over 750 casts. It contains a built-in GPS so that you can get a geographic fix on your location to within 10 meters, and it will record the following:

  • Conductivity
  • Pressure
  • Temperature
  • GPS
  • Salinity (derived)
  • Sound Speed (derived)

A PDF of the whitepaper for the unit can be downloaded here.
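YSI doesn’t say which equation the Castaway uses to derive sound speed, but as an illustration, Medwin’s (1975) simplified formula is a common choice for computing it from the measured temperature, salinity and depth:

```python
def sound_speed_medwin(t_c, s_psu, z_m):
    """Speed of sound in seawater (m/s), Medwin (1975) approximation.
    Valid roughly for 0-35 deg C, 0-45 PSU and depths to ~1000 m --
    plenty of headroom for a 100 m instrument."""
    return (1449.2 + 4.6 * t_c - 0.055 * t_c**2 + 0.00029 * t_c**3
            + (1.34 - 0.010 * t_c) * (s_psu - 35.0) + 0.016 * z_m)

# 15 deg C, 35 PSU, at the surface:
print(sound_speed_medwin(15.0, 35.0, 0.0))  # ~1506.8 m/s
```

Salinity itself is derived from the measured conductivity, temperature and pressure, which is why the spec sheet lists both salinity and sound speed as “derived” quantities.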

Scanfish Undulating Towed Vehicle

Lucky me, I happened to be in the right place at the right time. I was over at the CEOE Marine Operations Building and I ran into Brian Kidd, a marine technician aboard the RV Hugh R. Sharp. Brian is the resident expert on multibeam echosounder systems, and he agreed to talk on camera about some of the data acquisition systems that he’s involved with. While we were talking I noticed that the Scanfish was opened up and getting prepped for an upcoming science mission, so Brian volunteered to talk about the Scanfish as well. The segment on the multibeam is a tad longer, as we had to do some travelling around the ship and ashore to cover the various components while they were being serviced. The multibeam video has been posted and is available here.


The Scanfish was originally a product of GMI of Denmark. GMI was purchased by EIVA, who integrated the Scanfish into their suite of hardware and software solutions in support of marine science and surveying. EIVA hosts a PDF showing specs for the Scanfish MK II on their site. The MK II looks like it is the equivalent of the Scanfish we discussed with Brian. EIVA also provides smaller Scanfish units including the Scanfish Mini and the Scanfish MK I.

The Scanfish is “flown” and monitored via a conductive cable that feeds data and parameters back to EIVA’s “Flight Software” – which the technician uses to control the Scanfish, the winch and to display and log the data being collected.

In addition to housing a CTD (which stands for Conductivity + Temperature + Depth) sensor, the Scanfish also supports the following optional sensors:

  • Fluorometer
  • Turbidity sensor
  • Transmissometer
  • Oxygen sensor
  • Optical Plankton Counter
  • ADCP (Acoustic Doppler Current Profiler)
  • Video Camera
  • Other customer-supplied sensors

Hydronalix Autonomous Science & Security Boat (HASS)

While I was at the BEST workshop, I had an opportunity to talk with Tony about H.A.S.S. – the Hydronalix Autonomous Science & Security Boat. You may have heard of the Hydronalix for another product that Popular Science did a write-up on recently – E.M.I.L.Y. – which stands for “EMergency Integrated Lifesaving lanYard” (see Robo-Baywatch article).

HASS is a small hydro-jet powered boat a tad over 4 feet long. Its claim to fame is that it has the ability to approach marine mammals at a much closer proximity than is allowed via standard surface ships and that it will have much less of an impact with its presence due to its small size and electric drive. I shot a video with Tony (my apologies for the poor sound quality – lots of background noise).


HASS has an impressive list of features and capabilities which include:

  • Hydro-jet powered to prevent damage to marine animals from a propeller
  • Remotely Operated Camera
  • Side-scanning Sonar
  • Electric powered (batteries can be changed in ~5 minutes)
  • Speeds up to 40 mph, with 20-30 mph typical
  • Endurance from 2-7 hours depending on operating conditions
  • Wireless control via a nearby ship (~0.5 mile range) or support for control via Iridium satellite phone



YSI EcoMapper AUV

As promised, I’m starting to work my way through a couple of the videos that I shot at the BEST Workshop in Oxford, MD (see previous post). My apologies for the poor sound quality – we had a ton of people on the back deck, which created a lot of background noise. Not bad considering I shot it with my cheapy $120 HD video camera, though.

This is a recording of a discussion that I had with Chris from YSI about their EcoMapper AUV. It appears that YSI has taken the OceanServer AUV and loaded it with an impressive array of sensor technologies. The AUV runs Windows XP Embedded, and it simply appears as another computer on the network when you’re interacting with it over wireless communications. I’ll shut up now and let Chris do the talking; he’s the expert.


Bay & Estuarine Sensor Technologies (BEST) Workshop

I was able to attend the 2010 Bay & Estuarine Sensor Technologies (BEST) Workshop last week. It was held July 27-30 at the Environmental Science & Training Center at COL in Oxford, MD. Also called the “BEST by the Bay” workshop, this year’s workshop is a follow-up to last year’s “AUVs in the Bay I”, which was held in June of 2009.

AUV on Display at BEST


I am a first-time attendee and it was well worth the trip. There were some awesome technologies on display at the center, including Buoys, Autonomous Underwater Vehicles (AUVs), Unmanned Surface Vehicles and other technologies that allow for limited human interaction in the collection of water quality data in remote regions of the bays and estuaries. The workshop focused on “the application of sensors used in estuarine systems and storing the data using IOOS protocols so it can be used in environmental forecasting models, such as hypoxia”.

I was only able to attend the Wednesday portion of the program, so I’ll only write about what I saw. After walking around and drooling over many of the AUVs and ROVs on display (lucky they are all watertight), we went around the room and introduced ourselves, and the meeting started.

Kids Learning from Art Trembanis


Attendees included a wide gamut of technology vendors, managers, scientists and even educators, students and Boy Scouts. The head honcho, Doug Levin of NOAA, quickly took charge and got all of the AUV operators who were going to run a mission busy programming their AUVs for the task at hand. In addition to the science mission, the day also included a keynote and several other presentations throughout the afternoon.

I took a few minutes and recorded a few Question & Answer sessions with some of the attendees.  I hope to cobble those together and get them online sometime soon.

Welcome to Ocean Bytes!

Welcome to the Ocean Bytes Blog.

This is our first post. Kick back, buckle up and enjoy the adventure!

© 2024 Ocean Bytes Blog
