Category: IT Stuff

Making the Grade with SSL

Disclaimer: These are the steps that I followed. Please do due diligence and investigate fully before you attempt to modify your own server. Your mileage may vary…

I have a number of websites that run on Windows servers running Internet Information Services (IIS). One requirement I pretty much insist on is that if a site allows you to log in, it has to offer an encrypted means of communication. We do that by generating a certificate request inside of IIS Manager and sending the certificate signing request (CSR) off to be signed by a certificate provider like GlobalSign, DigiCert, etc. The certificate provider will sign the certificate and send you back a blob of text that you can save in a text file with a *.cer extension. You then open IIS Manager, select the server and complete the CSR, which installs the certificate on the server. Finally, you edit the bindings for the website that you want to enable SSL on, add an HTTPS binding and select the certificate.

Easy peasy…you’re done, right? Unfortunately, not quite.

There's all kinds of security buzz these days about SSL work-arounds and tricks that reduce the security it provides – attacks with funky names like BEAST, POODLE, FREAK, etc. So we want to make sure that the ciphers and encryption techniques we use are as safe as possible. There are tools available on the web that will hammer your SSL implementation and tell you if there might be any weaknesses. One such online tool is the Qualys SSL Labs test.

I ran the SSL Labs scan on a Windows Server 2008 R2 box running IIS 7.5 that I'd just installed a certificate on. The results were not very good with the out-of-the-box settings – an “F” (see below).

SSL Labs Initial Scan

The report gives some feedback on what it thinks the deficiencies are in your site's SSL configuration, along with links to more info. In the case of this Windows 2008 R2 server, it identified:

  • SSL 2 is supported – it’s old, it’s creaky, and it’s not to be trusted
  • SSL 3 is supported – (see above) and it’s vulnerable to POODLE attack (oh noes – not poodles!)
  • TLS 1.2 isn’t supported – TLS 1.1 isn’t either, but the report leaves that out; we’ll fix that too
  • Some of the cipher suites the server advertises as supported are either considered weak or don’t support Perfect Forward Secrecy

The first three items we can fix by editing the registry; the last item requires us to modify one of the group policy settings. The standard disclaimers apply – don’t make any changes to your system unless you are a highly trained professional who understands that these changes may cause your system to no-worky, and make sure you have a full backup of the system so that you can restore it if things go sideways.

To disable SSL 2.0 & 3.0 and to enable TLS 1.1 & 1.2, I had to run Regedit.exe and go to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols

You’ll probably only see one key under Protocols – SSL 2.0 – and in my case it only had a Client key.

SSL 2.0 Initial Values

I created a Server key under SSL 2.0 and added a DWORD named “DisabledByDefault” with a data value of 1. Now the server won’t attempt SSL 2.0 connections.

Disable Server SSL 2.0 and 3.0

To disable SSL 3.0, create a similar SSL 3.0 key under Protocols, create a key called Server under it, and add a DWORD named DisabledByDefault with a data value of 1 there as well. No more SSL 3.0 served up now.

To enable TLS 1.1 and 1.2, you follow similar steps: create TLS 1.1 and TLS 1.2 keys under Protocols and a Server key under each. This time, however, I added two DWORD values under each Server key – one named DisabledByDefault with a data value of 0 (we don’t want them to be disabled) and a second named Enabled with a data value of 1 (the default is 0, so you’ll need to change the value to 1 once you create the entry).
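Rather than hand-editing, the registry changes above can also be scripted. Here’s a minimal sketch that generates a .reg file you could import with Regedit – it assumes the standard SCHANNEL registry paths, so double-check the output against your own server before importing anything:

```python
# Sketch: generate a .reg file for the SCHANNEL protocol changes described
# above. Verify the paths and values against your own server before use.
BASE = r"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols"

# (protocol key name, DisabledByDefault value, Enabled value or None to omit)
settings = [
    ("SSL 2.0", 1, None),   # disable SSL 2.0
    ("SSL 3.0", 1, None),   # disable SSL 3.0
    ("TLS 1.1", 0, 1),      # enable TLS 1.1
    ("TLS 1.2", 0, 1),      # enable TLS 1.2
]

def build_reg_file(settings):
    lines = ["Windows Registry Editor Version 5.00", ""]
    for proto, disabled, enabled in settings:
        lines.append(rf"[{BASE}\{proto}\Server]")
        lines.append(f'"DisabledByDefault"=dword:{disabled:08x}')
        if enabled is not None:
            lines.append(f'"Enabled"=dword:{enabled:08x}')
        lines.append("")
    return "\n".join(lines)

print(build_reg_file(settings))
```

Save the output as a .reg file and double-click it on the server (or merge it with reg.exe) to apply all four protocol changes in one shot.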

Keys to enable TLS 1.1 and 1.2

I closed Regedit – no need to “save”, as changes are written as you make them.

Next we need to edit the group policy setting that determines which SSL cipher suites the server will offer up. To edit the group policy on my stand-alone server, I clicked Start -> Run and typed “gpedit.msc” to open the Windows group policy editor snap-in. The entry we want to modify is under:

Computer Configuration -> Administrative Templates -> Network -> SSL Configuration Settings

The entry we want to modify is “SSL Cipher Suite Order” which was “Not Configured” by default. This means that it falls back to the Windows Server default ciphers and ordering.

SSL Cipher Suite Order Default State

To only serve up ciphers that aren’t weak and that support Perfect Forward Secrecy, I had to choose a subset of ciphers. Luckily, Steve Gibson at GRC shared a list of ciphers that meet those criteria on his site.

One caveat is that the list you paste into the group policy editor has to be a single line of comma-separated values – no carriage returns or the like. I copied the text from Steve’s site into Notepad and then hit Home + Backspace for each line, starting at the bottom, until I had a single line of comma-separated values.
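The Notepad surgery can also be scripted. Here’s a small sketch that collapses a pasted multi-line list into the single comma-separated line the policy editor expects – the three suite names below are just examples, not the full list from Steve’s site:

```python
# Sketch: collapse a pasted multi-line cipher suite list into the single
# comma-separated line that the group policy editor requires.
def to_single_line(text: str) -> str:
    # Drop blank lines, strip whitespace and any trailing commas, then rejoin.
    suites = [line.strip().rstrip(",") for line in text.splitlines() if line.strip()]
    return ",".join(suites)

pasted = """TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
"""
print(to_single_line(pasted))
# TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
```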

Cipher suites in a single line

Click the “Enabled” radio button, highlight the default values in the SSL Cipher Suites textbox and delete them, paste in the new values from Notepad (remember the single-line, no-line-breaks rule), click Apply, then OK, and we’re done.

SSL Cipher Suite Order Enabled

I then closed the group policy editor MMC snap-in, rebooted the server (the change won’t take effect until you reboot) and went back and re-ran the Qualys SSL Labs test after clicking the “Clear cache” link. SSL Labs caches the results of the previous scan, so unless you click the link, you’ll just be looking at the previous results.

Qualys SSL Labs A Grade

Voila! We’ve gone from an “F” grade to an “A” grade. Whether the site is actually more secure is beyond the scope of this blog post, but if I’m being asked to serve up an SSL-secured site and it gets an “F”, there would be some ’splainin’ to do.

Hopefully this helps with understanding what steps were required for me to get the “A” grade.

DeepZoom of Endeavour on the Launch Pad

[ shut down, so my DeepZoom image is no longer available. I’ll re-create it soon…]

(The image above is dynamic and zoomable – play around with it some. Mouse over it and use your scroll wheel, click and drag around on the image, or click the plus and minus buttons; you can even go full screen with the button in the lower-right-hand corner. Have fun with it!)

One of the challenges of taking photos of special events and places is that they always look so small and lacking in visual acuity and detail. You take a picture and then later, when you’re looking at it, you feel underwhelmed that it just doesn’t capture the clarity that you remember seeing.

Two technologies that I cobbled together to create the zoomable picture above of Endeavour (STS-134) on the launch pad are Microsoft ICE (Image Composite Editor), which stitches overlapping photos into one large image, and DeepZoom, which tiles that image and creates JavaScript that lets you zoom in and out to enjoy much more detail. You can learn more about Microsoft ICE via this HD View blog posting, including details on what it can do as well as download links (it’s free!). I used my digital camera to zoom in on the shuttle while it was on the launch pad, after the RSS (Rotating Service Structure) was retracted, and took a matrix of photos, making sure that each photo overlapped the others a little so that ICE could stitch them into one large hi-res photo. Since we’re limited in the number of pixels we can display on a screen, I leveraged DeepZoom to break the image into a series of sub-images and to create JavaScript that swaps in higher-resolution tiles as you zoom in – similar to what you find when you zoom in on a Google Map.

Microsoft has made it quite easy to automagically create DeepZoom images (based on SeaDragon technology) via their site. All I had to do was upload the composited image that I’d created using ICE to a web server, feed in the URL of the large image file, and then, after the file had been processed, copy the embed code from the results and paste it into this post. The resulting JavaScript and tiles are hosted on their site, so I didn’t even need to include them in my own image file holdings.
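To get a feel for why the tiling works, here’s a rough sketch of how a Deep Zoom style pyramid is sized: each level is half the previous one until the whole image fits in a single tile. The 16384×8192 image size and 256-pixel tiles are example numbers for illustration, not the actual dimensions of my composite:

```python
# Sketch: size a Deep Zoom style tile pyramid. Each level halves the
# previous one until the image fits in a single tile; the viewer swaps in
# tiles from finer levels as you zoom.
import math

def pyramid_levels(width, height, tile=256):
    """Return (width, height, tile_count) per level, finest level first."""
    levels = []
    w, h = width, height
    while True:
        cols = math.ceil(w / tile)
        rows = math.ceil(h / tile)
        levels.append((w, h, cols * rows))
        if w <= tile and h <= tile:
            break  # everything now fits in one tile
        w = max(1, math.ceil(w / 2))
        h = max(1, math.ceil(h / 2))
    return levels

for w, h, ntiles in pyramid_levels(16384, 8192):
    print(f"{w:6d} x {h:5d} -> {ntiles} tiles")
```

The payoff is that the viewer only ever fetches the handful of tiles covering your current viewport, no matter how huge the source composite is.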

I hope this helps in two ways:
A) You can appreciate the awesome sight that we were seeing at the STS-134 NASATweetup
B) You now know how to fish (i.e., how to create cool visualizations like this). Have at it!

ps – If you want to pull down the full hi-res image that was used to create this so you can print out an awesome poster of the shuttle on the launch pad, you can get it here. Enjoy!

OSU Ships Underway Data System

One of the highlights of going to the RVTEC meeting is getting to hear about some of the cool projects that are underway at the various institutions. One talk that caught my attention, given by the techs at Oregon State University, was about the SUDS system, an NSF-sponsored project.

I talked David O’Gorman and Toby Martin into doing a quick rundown of their SUDS system on camera during one of the breaks. SUDS is an acronym for the Ships Underway Data System, which consists of software and two data acquisition boards that they designed in-house – one analog and one digital. Each board can be programmed with metadata about the sensors that are attached to it. When the boards are plugged into the ship’s network, they broadcast XML data packets – which include both the data and metadata about that data – via UDP for a back-end data acquisition system to capture and store. For redundancy, there can be multiple acquisition systems on the network as well, I’m told.

The data acquisition cards can be powered directly or via PoE (Power over Ethernet), and they can also supply power to the sensor if needed. The digital cards accept RS232 and RS485. The analog card has 4 differential input channels – 0-5V on two of the channels and 0-15V on the other two – and handles 600Hz to 20kHz input signals.
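The self-describing broadcast design is what makes the system so flexible: any listener on the network can capture the packets. Here’s a minimal sketch of what a back-end acquisition listener might look like – note that the packet layout, port number and field names below are all invented for illustration; check the OSU SUDS documentation for the real schema:

```python
# Sketch of a back-end acquisition listener for SUDS-style UDP broadcasts.
# The <sample> packet layout and port are hypothetical, not the real schema.
import socket
import xml.etree.ElementTree as ET

SUDS_PORT = 55555  # hypothetical broadcast port

SAMPLE = (b"<sample>"
          b"<metadata><sensor>SBE45</sensor><units>PSU</units></metadata>"
          b"<value>35.1</value>"
          b"</sample>")

def parse_packet(payload: bytes) -> dict:
    """Pull the data value and its embedded metadata out of one XML packet."""
    root = ET.fromstring(payload)
    return {
        "sensor": root.findtext("metadata/sensor"),
        "units": root.findtext("metadata/units"),
        "value": float(root.findtext("value")),
    }

def listen(port: int = SUDS_PORT) -> None:
    """Capture and print every broadcast packet seen on the network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    while True:
        payload, addr = sock.recvfrom(65535)
        print(addr, parse_packet(payload))

print(parse_packet(SAMPLE))
```

Because the metadata rides along in every packet, a second (or third) listener like this can be dropped onto the network for redundancy without any extra configuration.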

Their website has links to a PDF of the presentation they gave at the 2010 UNOLS RVTEC meeting as well as various examples of the data packets that the system broadcasts. Definitely something that could be quite useful for handling the ever-changing data acquisition needs on today’s research vessels. I look forward to learning more about the SUDS system in the days to come.


RV HSBC Atlantic Explorer

Just got back from the 2010 UNOLS RVTEC meeting, which was held at the Bermuda Institute of Ocean Sciences (BIOS) – home of the RV HSBC Atlantic Explorer.

(Acronym Police: UNOLS = University-National Oceanographic Laboratory System and RVTEC = Research Vessel Technical Enhancement Committee).

For those unfamiliar with RVTEC, it is a committee organized around 1992 to “provide a forum for discussion among the technical support groups of the National Oceanographic Fleet” in order to “promote the scientific productivity of research programs that make use of research vessels and oceanographic facilities and to foster activities that enhance technical support for sea-going scientific programs” as listed in Annex V of the UNOLS charter. Membership is extended to UNOLS member institutions but “Participation shall be open to technical and scientific personnel at UNOLS and non-UNOLS organizations”.

The meeting agenda was pretty intense, and we were pretty much straight out from Monday through Friday afternoon. There were a lot of scary-smart people in the room doing some pretty amazing things in support of science operations at their respective institutions. I tried to compile a list of Tech Links on the site to make it easier to find some of the various resources that were discussed at the meeting. I did the same thing at last year’s RVTEC meeting in Seattle, but some additions and corrections were needed based on feedback from the members. I’m hoping that I’ll be able to obtain funding to attend next year’s meeting and perhaps the upcoming Inmartech meeting (look for a post on Inmartech soon).

I shot some video, made some fantastic contacts and had some interesting discussions at this year’s RVTEC meeting. If all goes smoothly, I’ll have a couple of new blog entries online this week to help share some of the wealth of knowledge.

3DVista Panoramic Tour of the Sharp

I tinkered around with a demo copy of 3DVista Stitcher and 3DVista Show 3.0 to push their capabilities a tad. I touched on the packages in a previous blog post about the Global Visualization Lab, where I did a simple panorama of the room. The wheels started turning, and we decided to push the envelope a little and create a series of panoramic views of the RV Hugh R Sharp as a proof of concept for an online virtual tour of a research vessel.

Panoramic Tour of the RV Hugh R Sharp

Click on this image to visit the proof-of-concept panorama…

The image above is a screen shot of the proof-of-concept panoramic tour we came up with. Click the image above or this hyperlink to visit the actual panoramic tour. The pane on the left shows an interactive panorama of one of the various points of interest on the ship. The right-hand pane shows a scan of the deck and compartment that the panorama represents. If there is no user action, the tour will cycle through a complete 360-degree view of each panorama and move on to the next panorama in the list. There are two drop-downs to the right: one above the deck layout to select a specific deck and one below it to select a specific panorama.

A really cool feature of the product is the ability to take the panorama full-screen for a more immersive experience. To do so, just click the arrow button in the top-right-hand corner next to the question mark symbol. Once in full-screen mode, you can easily cycle through the various panos by mousing over them near the bottom of the screen.

The 3DVista Show software also allows you to insert hot-spots into the panoramas that can either link to other pages/sites or include an audio clip in the mix. This makes it quite easy to include additional information about a specific area or feature. I inserted an animated arrow pointing to the Multibeam Operator Station on the Main Deck -> Multibeam Tech Area that links out to the Reson Seabat 8101 Multibeam Echosounder posting.

Multibeam Tech Pano

The mind races with the various uses for this type of technology. It allows mobility-impaired individuals and class groups to tour a space that they’d ordinarily be unable to access. It also allows scientists to “look around” and get a feel for the spaces that they’d be using when they come onboard a vessel. For a future project, I’d like to get support to do some panoramas both inside and outside of the various UNOLS lab vans, which would allow scientists to virtually stand in the vans and walk around to see how they’re laid out. 3D panoramas of research sites in remote locations like the Arctic and Antarctic also come to mind, as do tours of mineral samples and other collections, with hotspots included for the various specimens that link to additional information. The applications of this tech abound.

I talked with the folks at 3DVista and it looks like they offer a 15% academic discount for the software, so be sure to ask about it if you’re going to purchase it. They also list a one-shot 360-degree pano lens and adapters to make shooting the digital pics a little easier. We used a 180-degree fish-eye lens for our pano shots, which means we did 3 shots at each location, 120 degrees off from one another, and stitched them together with the 3DVista Stitcher program.

Many thanks to Lisa Tossey for taking the photos and getting this project rolling. I posted this as an unpolished proof-of-concept version. I look forward to the ready-for-prime-time panorama that she comes up with for the CEOE site. I also look forward to seeing any cool panoramas that are out there for research projects. Be sure to share your links.

How to Construct a Global Visualization Lab

My apologies for how long it took to get this up. I promised our colleagues at Xiamen University that I’d put up the complete specs for the Global Visualization room – a component of Dr. Matt Oliver’s ORB Lab – and the pesky day job kept getting in the way.

Panorama Fish Eye Lens

I originally tried a video walk-through of the GVis Lab, but it ended up being a lot of panning and zooming around, which I didn’t really care for. Instead I got to try out a fancy digital camera with a 180-degree fish-eye lens the other morning, which I used to shoot three shots of the room, 120 degrees apart from each other. I used a software package called 3DVista Show to stitch the fish-eye pictures into a panorama image, which I uploaded to a free online hosted tour on their site. Once the image was uploaded, the service provided an iFrame string that I included in the post to embed the panorama project. Be sure to click the full-screen icon (top right-hand arrow next to the question mark) to see the panorama a little better.

As you pan around the room, you’ll see the major components of the lab, which are described below.

The Dell Precision T7500 workstation was selected because it was one of the few systems capable of handling (2) PCIe x16 graphics cards simultaneously. We started with one graphics card with the expectation that it could handle the video workload, but wanted the option to add another graphics card in SLI mode to boost graphics performance. So far we haven’t needed a second video card – everything runs quite smoothly with Windows 7 x64 as the base operating system running Google Earth Professional.

The nVidia graphics card has two DVI outputs. One output is fed into the VWBox 133A video splitter, which spreads the 4300×2100 signal across the (9) monitors in the 3×3 array. The VWBox also allows us to “subtract out” the bezels, dropping the few lines of video where the bezels sit – making for no stepping in diagonal lines or graphics. The Samsung 460UX-2 monitors are all 1920×1080 (1080p) with an 11mm bezel on all four sides – the smallest-bezel monitors available when we built the wall. For Google Earth and other high-resolution work, the display is fantastic. As small as the bezels are, though, they can cause readability problems for text that happens to line up with them, such as bulleted text on a PowerPoint slide. To eliminate this possibility, a second large-screen monitor was added so that this type of lower-resolution content can be dragged over to it and displayed at a larger size. The second DVI output drives that 60” LCD display at the right-hand side of the room at 1920×1080 resolution. Windows treats the two as one large virtual display, so content can easily be dragged from the large multi-screen display to the smaller 60” LCD and back.
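As a back-of-the-envelope check on the bezel compensation, here’s a sketch estimating how many pixel rows and columns fall behind each 11mm bezel edge. The 46-inch diagonal is my assumption for the 460UX-2 panel; treat the numbers as estimates for illustration:

```python
# Sketch: estimate how many pixels hide behind each bezel edge on a
# 46-inch (assumed) 1920x1080 panel with 11 mm bezels, which is what the
# VWBox "subtracts out" so diagonal lines stay straight across the array.
import math

def bezel_pixels(diag_in=46, px_w=1920, px_h=1080, bezel_mm=11):
    diag_mm = diag_in * 25.4
    aspect = px_w / px_h
    # Physical panel width from the diagonal and aspect ratio.
    width_mm = diag_mm * aspect / math.hypot(aspect, 1)
    px_per_mm = px_w / width_mm
    return round(bezel_mm * px_per_mm)

print(bezel_pixels())  # roughly 21 pixels hidden behind each bezel edge
```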

We wanted the ability to present and control the system from anywhere in the room, so the RF Go mouse and keyboard were selected. The RF dongle lets us stay connected from up to 100’ away from the computer, which covers the entire lab and beyond. We tried other wireless keyboards and mice, but they quickly lost their connection 10-15 feet away. The 3Dconnexion SpaceNavigator makes it easy to manipulate Google Earth, but it is a USB device (no wireless equivalent is available yet). To let us stretch the SpaceNavigator anywhere in the room, a USB extender was used, allowing a Cat5 cable to act as an extension cord for the controller. The same type of extender was used to allow placement of the Orbit cam on the opposite side of the room (next to the 60” LCD display).

The Orbit cam is intriguing: it has a stepper motor in the base that allows the operator to turn it left and right, and the auto-focus zoomable lens can be moved up and down as well. This lets the operator pan and zoom anywhere in the room when we’re connected to another researcher or student via Skype or other teleconferencing software.

There is a photo below of me standing next to the multi-display wall with the CEOE website maximized on it. This shows the uber-high resolution of the display and some of the issues that having it alone (no 2nd display) could cause. The first such second monitor that we put in was an 82” Mitsubishi rear-projection LCD display. We ended up returning that display, even though it was larger, because it just wasn’t bright enough – it looked extremely dark sitting next to the much brighter Samsung LCD display wall.



Video Wall Scale

Monitor Wall Mounts

Sharp Aquos 60 inch LCD

Logitech Orbit cam

USB to Cat5

3D Space Navigator

RF Keyboard and Mouse

Pyle Amp

VWBox 133A

Dell Precision T7500


I continue to watch the professional display manufacturers’ sites for bezel-less LCD displays, which would be the only upgrade I could imagine for the room. If you run across a 46”+ 1080p zero-bezel display, be sure to send me a link.

The Chief Fusion adjustable wall mounts were quite handy for making minor tweaks to the monitors. It seems that no matter how well you measure, you can never get the displays just perfect, so the ability to micro-adjust them was invaluable. To allow us to lag-screw the mounts to the wall pretty much anywhere (whether there is a stud or not), we lined the entire back wall with plywood and then layered the front with drywall for a finished look. Later on, if we decide to grow to two 9-monitor display arrays, it would be easy enough to add another graphics card, 9 more monitors and a second VWBox.

The big secret to turning the project from just a vision into an awe-inspiring reality was our most excellent facilities guys and gals. Without their expertise and attention to detail, the room could have turned out just ho-hum. They took our ramblings and descriptions of how we’d like things to look and made them come to life. Kudos to them for the room turning out as nice as it did.

Hopefully the information provided here will allow you to build-up your own visualization wall. If you have any questions or comments, please feel free to post them to the site.

Small & Mighty Mini-Top Barebones NetPC

MiniTop Contents

I thought I’d take a minute to share some info on the small and mighty Mini-Top barebones system from Jetway Computer. (Not to be confused with the small and mighty Danny Diaz ;?) This unit is basically the guts of a netbook without the screen, so I’ll call it a NetPC. We are thinking about introducing them into the computing site here at work, and I was pretty impressed by the feature set and tiny size. Keep in mind that there are several models of ITX barebones systems to choose from over at Jetway. We opted to go with the model JBC600C99-52W-BW, which retails for about $270 at NewEgg. The “-BW” at the end means that it ships with a metal bracket (shown in front of the included remote in the pic above) that will let you mount the unit to the VESA mounts on the back of most LCD monitors.

Smaller than my hand

Since the unit is so small (see pic to the right), you can tuck it out of the way quite easily behind a monitor. It also comes with an angled metal bracket that allows you to stand it up on end, plus stick-on rubber feet in case you want to lay it on its side. Note that this is a “barebones” system, which means that it’s up to you to add the memory (up to 4 Gigs of RAM), a single interior hard drive (2.5″ SATA) and a monitor to the mix. We added a 60Gig OCZ Agility 2 SSD (solid state drive) and a couple of Gigs of DDR2 800/667 SODIMM memory to the box (purchased separately). The unit comes with a driver CD that has both Windows and Linux drivers on it, but since the unit doesn’t have an optical drive you’ll need to copy them to a thumb drive to use them. You’ll also need to figure out how to install an operating system on the unit. In our case, since we were installing Windows 7, we used the Windows 7 USB/DVD Download Tool to take an ISO file version of our Windows 7 install DVD and create a bootable thumb drive with the install DVD contents on it. Installation was easy peasy.

Hardware specs are pretty impressive given its low cost and small size:

  • Intel Atom Dual-Core 525 CPU
  • nVidia ION2 Graphics Processor
  • DVI-I and HDMI 1.3 video outputs
  • Integrated Gigabit Ethernet & 802.11 b/g/n wifi
  • 12V DC 60W power input so it can be easily run off battery or ships power
  • Microphone and Headphone connectors
  • LCD VESA mount (-BW model only)
  • Jetway handheld remote control
  • USB 2.0 ports (5) and eSata connection

As I mentioned, we’re investigating using these as replacements for some of the computing site computers. We installed Windows 7 on the system, and between the dual-core Atom processor and the SSD, I can’t tell any difference in performance between this system and the Core 2 Duo desktops that are already in the site. Other possible uses include a thin client, a kiosk PC, a set-top box for large wall-mounted LCD displays, and a small low-power PC aboard ship or inside buoys or other deployed equipment. The unit has both DVI and HDMI outputs, so you can easily drive a small LCD or a huge flat-panel TV as long as it has one of those inputs (as most do). The nVidia ION 2 graphics system will supposedly drive a full 1080p HD display. I took some pics of the unit’s interior (below) so you can get an idea of how the systems are laid out inside and out.

Front Interior View

Rear Interior View

Side Interior View

These aren’t the only mini-PCs on the market – there are others like the Zotac ZBOX and the Dell Zino HD, and I’m sure plenty more. This is just the model that we’re playing with here at the college. Exciting times ahead as these units ramp up in performance and drop down in size and power draw.

Time Lapse Video on the Cheap

The video above is a time lapse of a day in the life of the UD Wind Turbine in Lewes, Delaware.

We were quite excited when they told us that the UD Wind Turbine project was a go. As the time grew near for construction to start, we wanted to chronicle the construction progress and create a time lapse video. I did some research and looked into various webcams with weatherproof housings and the like, but sticker shock at the multi-thousand-dollar price tags for the equipment, as well as the networking and power hassles of connecting to it, made me shy away from a complicated rig. I decided that the best way to go was the simple route.

The task really screamed for a lower-cost, battery-powered, weather-resistant camera that could be set to take a picture every X minutes. I finally narrowed the search down to the Wingscapes BirdCam 2.0 outdoor camera. The camera retails for about $200, but I found it on Amazon for just over $150. It has advanced features like motion sensing and light sensing, a built-in flash, plus lots of other nifty extras. The main selling points for me were that it was designed for outdoor use (the turbine was being installed in spring, and it was rainy), it stored its images on an easily accessible Secure Digital card (up to 4 Gigs), it had a user-programmable time lapse mode, and it ran on four D-cell batteries for more than 4 weeks of endurance.
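When planning a shoot like this, the arithmetic is simple enough to sketch: how many stills a given interval produces over the deployment, and how long the finished video runs at a given playback rate. The 10-minute interval and 30fps playback below are example values, not the settings I actually used:

```python
# Sketch: time lapse planning arithmetic. Given a shooting interval and a
# deployment length, how many stills do we get, and how long is the video
# when played back at a chosen frame rate?
def timelapse_stats(days: float, interval_min: float, fps: int = 30):
    frames = int(days * 24 * 60 / interval_min)  # one still per interval
    video_seconds = frames / fps                  # playback duration
    return frames, video_seconds

frames, secs = timelapse_stats(days=28, interval_min=10, fps=30)
print(f"{frames} stills -> {secs:.0f} s of video")  # 4032 stills -> 134 s of video
```

Running the numbers up front also tells you whether the 4 Gig SD card and the 4-plus weeks of battery life will actually cover the deployment you have in mind.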

As you can see from the time lapse video that MPEO created of the construction at the turbine base, the results were just what we were looking for (except for the big pile of dirt they put in front of the camera ;?). The video from afar was created using images FTP’d from a webcam located over at the Marine Operations Building. I’ll cover the configuration and components for that webcam setup in a later posting.

I can easily imagine many other uses for this kind of device – time lapse videos of coastal erosion, tide cycles, lab experiment time series, etc. In addition to the features cited above, the camera also has video and USB outputs on the side of the unit, as well as an external power connector at the bottom for lengthier time lapses. All in all, highly recommended.

Birdcam Cover Closed

Cover Closed

Birdcam with the cover open

Cover Open

Birdcam Side View

Side View

I used iMovie to create the movie at the top of this post from all of the stills, but I also just as easily created one using the freely downloadable Windows Live Movie Maker, if you’re running Windows.

Polar Orbiting Satellite Receiving Station

The video above is a quick screencast of NASA JPL’s Eyes on the Earth application, which shows the tracks of various satellites orbiting the globe. It’s a really cool application that gives a top-notch overview of some of the satellites currently in orbit and their trajectories around the Earth. Take some time and poke around – you’ll be glad you did.

Polar Satellite Radome

The reason I included it is that I promised to cover the polar orbiting satellite receiving station in a previous blog post about the new Satellite Receiving Station in Delaware. In that post I discussed the geostationary satellite receiving station; in this one, I hope to shed some light on the polar orbiting setup.

What’s Inside the Radome

MODIS Satellite Pass

The equipment for the polar orbiting satellite receiving station is a bit more involved than the pretty much non-moving geostationary setup. As the name implies, the polar orbiting satellites do just that – they orbit the Earth north and south, going from pole to pole. Their path is relatively simple: they just go around the Earth in circles, but as they do so, the Earth rotates beneath them. The satellites point their cameras toward the Earth and essentially capture a swath of data during each orbit. Since the Earth is rotating beneath them, the swath appears as a diagonal path if you look at the overlay.

Inside the Radome

In order to capture data from a moving target, the dish has to be able to rotate and move in three axes to follow the satellite of interest. To protect the receiving equipment from the weather, it is typically installed in a dome-shaped fiberglass enclosure called a “radome”. To keep the design relatively simple, only one mounting configuration and radome setup is made, and it’s designed to mount onboard a ship. It is then relatively simple to attach a mounting bracket to the top of a building and bolt the radome assembly to it.

The video at the top of the page shows that there are several satellites in orbit, so the TeraScan software has to pull down satellite ephemeris data from CelesTrak each day, take into account the location of the tracking station, and generate a calculated schedule of which satellites will be visible to the dish throughout the day. As there may be more than one satellite in view during any given time period, the satellite operator assigns a priority weighting to each satellite. The TeraScan software then uses that weighting to decide which satellite to aim the dish at and start capturing data from.
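The pass-selection idea can be sketched roughly like this: given overlapping visibility windows, track the highest-priority satellite and skip any pass that conflicts with one already chosen. The pass times and priority numbers below are made up for illustration, and the real TeraScan scheduler is certainly more sophisticated:

```python
# Sketch: pick which satellite passes to track when visibility windows
# overlap, using operator-assigned priorities (lower number = higher priority).
def schedule_passes(passes, priority):
    """passes: list of (satellite, start, end). Returns the passes tracked,
    dropping any that overlap an already-chosen higher-priority pass."""
    chosen = []
    for sat, start, end in sorted(passes, key=lambda p: (priority[p[0]], p[1])):
        # Keep this pass only if it doesn't overlap anything already chosen.
        if all(end <= s or start >= e for _, s, e in chosen):
            chosen.append((sat, start, end))
    return sorted(chosen, key=lambda p: p[1])  # back into time order

priority = {"Aqua": 1, "Terra": 2, "NOAA 19": 3}
passes = [("Terra", 0, 10), ("Aqua", 5, 15), ("NOAA 19", 20, 30)]
print(schedule_passes(passes, priority))
# [('Aqua', 5, 15), ('NOAA 19', 20, 30)]
```

Here the Terra pass loses out because it overlaps the higher-priority Aqua pass, while the later NOAA 19 pass is tracked untouched.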

Receiving Station Workstations

Acquisition and Processing Systems

Inside the building is a rack of computers and receivers whose purpose in life is to control the dish on the roof and to receive and process the data relayed down from the satellites. The receiving station at UD has both X- and L-band receivers, which receive the data stream and pass it to a SeaSpace Satellite Acquisition Processor. The processor then sends the data packets to a Rapid MODIS Processing System (RaMPS), which combines the granularized HDF data files from the satellites into a TeraScan Data File (TDF). Once in this format, various programs and algorithms can be run against the TDF file, and channels of interest can be combined using NASA/NOAA and other user-supplied algorithms to create the output product of interest. As the files can get rather large, and there can be several of them coming in throughout the day, they are then moved to a Network Attached Storage (NAS) server and stored until they are needed.

Satellites Licensed

The UD receiving station is licensed and configured to receive data from the following satellites:

  • Aqua
  • Terra
  • NOAA 15
  • NOAA 17
  • NOAA 18
  • NOAA 19
  • MetOp-A (Europe)
  • FY-1D (China)

Hopefully this sheds a little more light on the polar orbiting receiving station and its capabilities. Let me know if there are any additions or corrections to the information I’ve posted.

My IT is Greener than Your IT (or Server Virtualization FTW)

Carbon Carbon Everywhere

Carbon footprint, carbon emissions, carbon taxes…carbon carbon carbon. That’s all we’re hearing these days. If we do something that implies that we’re using less carbon then voila! We’re suddenly “Going Green”. As a carbon-based life form, I’m quite fond of carbon personally, but the story today is about how to minimize the amount of carbon that we’re responsible for having spewed into the atmosphere and taken up by the oceans. So the thing you need to do to eliminate your carbon footprint as well as the footprint of your neighbors and their neighbors is install a 2 Megawatt Wind Turbine. Problem solved…you are absolved of your carbon sins and you may go in peace.


What’s that you say? You don’t have a 2MW wind turbine in this year’s budget? Then it’s on to Plan B…well, Plan A in my case, as I started down this road years ago, long before we installed the turbine. Even though the end result is a much greener IT infrastructure, that plan was originally geared towards gaining more system flexibility, efficiency and capabilities in our server infrastructure. I’d be lying if I said I started out doing it to “be green”, even though that was an outcome of the transition. (Unless of course I’m filling out a performance appraisal and it’ll give me some bonus points for saying so – in which case I ABSOLUTELY had that as my primary motivator ;?)

One of the things that we do here in the Ocean Information Center is to prototype new information systems. We specialize in creating systems that describe, monitor, catalog and provide pointers to global research projects as well as their data and data products. We research various information technologies and try to build useful systems out of them. In the event that we run into a show stopper with one technology, we sometimes have to switch to another that is incompatible with those already in use on the server, whether that’s the operating system, the programming language, the framework or the database. In these scenarios, it is hugely important to compartmentalize and separate the various systems that you’re using. We can’t have technology decisions for project A causing grief for project B now can we?

One way to separate the information technologies that you’re using is to install them on different servers. That way you can select a server operating system and affiliated development technologies that play well together and that fit all of the requirements of the project as well as its future operators. With a cadre of servers at your disposal, you can experiment to your heart’s content without impacting the other projects that you’re working with. So a great idea is to buy one or more servers that are dedicated to each project…which would be wonderful except servers are EXPENSIVE. The hardware itself is expensive, typically costing thousands of dollars for each server. The space that is set aside to house the servers is expensive – buildings and floor space ain’t cheap. The air conditioners that are needed to keep them from overheating are expensive (my rule of thumb is that if you can stand the temperature of the room, then the computers can). And lastly the power to run each server is expensive – both in direct costs to the business for electricity used and in the “carbon costs” that generating said electricity introduces. I was literally run out of my last lab by the heat that was being put out by the various servers. It was always in excess of 90 F in the lab, especially in the winter when there were no air conditioners running. So my only option was to set up shop in a teeny tiny room next to the lab. Something had to give.

We Don’t Need No Stinkin’ Servers (well, maybe a few)

A few years ago I did some research on various server virtualization technologies and, since we were running mostly Windows-based servers at the time, I started using Microsoft’s Virtual Server 2005. Pretty much the only other competitor at the time was VMware’s offerings. I won’t bore you with the sales pitch of “most servers usually only tap 20% or so of the CPU cycles on the system” in all its statistical variations, but the ability to create multiple “virtual machines” or VMs on one physical server came to the rescue. I was able to create many virtual servers on each physical server that I had. Of course, to do this, you had to spend a tad more for extra memory, hard drive capacity and maybe an extra processor; but the overall cost to host multiple servers on one physical box (albeit slightly amped up) was much lower. To run Virtual Server 2005, you needed to run Windows Server 2003 64-bit edition so that you could access more than 4 GB of RAM. You wanted a base amount of memory for the physical server’s operating system to use, and you needed some extra RAM to divvy up amongst however many virtual servers you had running on the box. Virtual Server was kind of cool in that you could run multiple virtual servers, each in their own Internet Explorer window. While that worked okay, a cool tool came on the scene that helped you manage multiple Virtual Server 2005 “machines” with an easier administrative interface. It was called “Virtual Machine Remote Control Client Plus”. Virtual Server 2005 served our needs just fine, but eventually a new Windows Server product line hit the streets: Windows Server 2008 was released to manufacturing (RTM) and began shipping on new servers.

Enter Hyper-V

A few months after Windows Server 2008 came out, a new server virtualization technology was introduced called “Hyper-V”. I say a “few months after” because only a Beta version of Hyper-V was included in the box when Windows Server 2008 rolled off the assembly line. A few months after it RTM’d though, you could download an installer that would plug in the RTM version of it. Hyper-V was a “Role” that you could easily add to a base Win2k8 Server install that allowed you to install virtual machines on the box. We tinkered around with installing the Hyper-V role on top of a “Server Core” (a stripped-down meat and potatoes version of Win2k8 Server) but we kept running into road blocks in what functionality and control was exposed so we opted to install the role under the “Full Install” of Win2k8. You get a minor performance hit doing so, but nothing that I think I notice. A new and improved version came out recently with Windows Server 2008 R2 that added some other bells and whistles to the mix.

The advantages of going to server virtualization were many. Since I needed fewer physical servers, the advantages included:

  • Less Power Used – fewer physical boxes meant lower power needs
  • Lower Cooling Requirements – fewer boxes generating heat meant lower HVAC load
  • Less Space – Floor space is expensive, fewer servers require fewer racks and thus less space
  • More Flexibility– Virtual Servers are easy to spin up and roll back to previous states via snapshots
  • Better Disaster Recovery – VMs can be easily transported offsite and brought online in case of a disaster
  • Legacy Projects Can Stay Alive – Older servers can be decommissioned and legacy servers moved to virtual servers

Most of these advantages are self-evident. The ones I’d like to touch on a little more are the “flexibility”, “disaster recovery” and “legacy projects” topics, which are very near and dear to my heart.


The first, flexibility, was a much needed feature. I can’t count how many times we’d be prototyping a new feature and then, when we ran into a show-stopper, would have to reset and restore the server from backup tapes. So the sequence would be: back up the server, make your changes and then, if they worked, move on to the next stage. If they didn’t, we might have to restore from backup tapes. All of this is time-consuming and, if you run into a problem with the tape (mechanical systems are definitely failure prone), you were up the creek sans paddle. A cool feature of all modern virtualization technologies is the ability to create a “snapshot” of your virtual machine’s hard drives and cause any future changes to happen to a different linked virtual hard disk. In the event that something bad happens with the system, you simply revert to the pre-snapshot version (there can be many) and you’re back in business. This means that there is much less risk in making changes (as long as you remember to do a snapshot just prior) – and the snapshotting process takes seconds versus the minutes to hours that a full backup would take on a non-virtualized system.
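The linked-disk mechanics behind snapshots can be sketched as a toy model (this is not Hyper-V's actual AVHD differencing-disk format, just the general idea: writes after a snapshot land in a new layer, and reverting discards the layers made after that snapshot):

```python
# Toy model of a snapshot chain: a base disk plus differencing layers.

class VirtualDisk:
    def __init__(self):
        self.layers = [{}]            # layers[0] is the base disk

    def write(self, block, data):
        self.layers[-1][block] = data # writes always hit the newest layer

    def read(self, block):
        # Newest layer wins, falling back through the chain to the base.
        for layer in reversed(self.layers):
            if block in layer:
                return layer[block]
        return None

    def snapshot(self):
        self.layers.append({})        # future writes go to a new diff layer
        return len(self.layers) - 1   # snapshot id

    def revert(self, snap_id):
        del self.layers[snap_id:]     # discard everything after the snapshot
        self.layers.append({})        # fresh layer for new writes

disk = VirtualDisk()
disk.write("mbr", "v1")
snap = disk.snapshot()
disk.write("mbr", "experimental")     # risky change after the snapshot
disk.revert(snap)                     # back to the pre-snapshot state
print(disk.read("mbr"))               # v1
```

Note how the revert costs almost nothing: it's just dropping the diff layers, which is why snapshotting beats a tape restore so badly on time.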

Another cool feature of snapshots is that they can be leveraged on research vessels. The thought is that you get a virtual machine just the way you want it (whether it’s a server or a workstation). Before you head out on a cruise you take a snapshot of the virtualized machine and let the crew and science parties have their way with it while they’re out. When the ship returns, you pull the data off the virtualized machines and then revert them to their pre-cruise snapshots and you’ve flushed away all of the tweaks that were made on the cruise (as well as any potential malware that was brought onboard) and you’re ready for your next cruise.

Another capability that I’m not able to avail myself of is the use of Hyper-V in failover and clustering scenarios. This is pretty much the ability to have multiple Hyper-V servers in a “cluster” where multiple servers are managed as one unit. Using Live Migration, the administrator (or even the system itself, based on preset criteria) can “move” virtual machines from Hyper-V server to Hyper-V server. This would be awesome for those times when you want to bring down a physical server for maintenance or upgrades but you don’t want to have to shut down the virtual servers that it hosts. Using clustering, the virtual servers on a particular box can be shuttled over to other servers, which eliminates the impact of taking down a particular box. One of the requirements to do this is a back-end SAN (storage area network) that hosts all of the virtual hard drive files, which is way beyond my current budget. (Note: If you’d like to donate money to buy me one, I’m all for it ;?)

I also use virtualization technologies on the workstation side. Microsoft has their Virtual PC software that you can use to virtualize, say, an XP workstation OS on your desktop or laptop for testing and development. Or maybe you want to test your app against a 32-bit OS but your desktop or laptop is running a 64-bit OS? No worries, virtualization to the rescue. The main problem with Virtual PC is that it’s pretty much Windows-only and it doesn’t support 64-bit guest operating systems, so trying to virtualize a Windows 2008 R2 instance to kick the tires on it is a non-starter. Enter Sun’s…errr…Oracle’s VirtualBox to the rescue. It not only supports 32 and 64-bit guests, but it also supports Windows XP, Vista and 7 as well as multiple incarnations of Linux, DOS and even Mac OS X (server only).

What does “support” mean? Usually it means that the host machine has special drivers that can be installed on the client computer to get the best performance under the virtualization platform of choice. These “Guest Additions” usually improve performance but they also handle things like seamless mouse and graphics integration between the host operating system and the guest virtual machine screens. Guest operating systems that are not “supported” typically end up using virtualized legacy hardware, which tends to slow down their performance. So if you want to kick the tires on a particular operating system but don’t want to pave your laptop or desktop to do so, virtualization is the way to go in many cases.

The use cases are endless, so I’ll stop there and let you think of other useful scenarios for this feature.

Disaster Recovery

Disasters are not restricted to natural catastrophes. A disaster is certainly a fire, earthquake, tornado, hurricane, etc. but it can also be as simple as a power spike that fries your physical server or a multi-hard drive failure that takes the server down. In the bad-old-days (pre VM) if your server fried, you hoped that you could find the same hardware as what was installed on the original system so that you could just restore from a backup tape and not be hassled by new hardware and its respective drivers. If you were unlucky enough to not get an exact hardware match, you could end up spending many hours or days performing surgery on the hardware drivers and registry to get things back in working order. The cool thing about virtualized hardware is that the virtual network cards, video cards, device drivers, etc. that are presented to the virtual machine running on the box were pretty much the same across the board. This means that if one of my servers goes belly up, or if I want to move my virtual machine over to another computer for any reason, there will be few if any tweaks necessary to get the VM up and running on the new physical box.

Another bonus to this out-of-the-box virtual hardware compatibility is that I can export my virtual machine and its settings to a folder, zip it up and ship it pretty much anywhere to get it back up and online. I use this feature as part of my disaster recovery plan. On a routine basis (monthly at least) I shut down the virtual machine, export the virtual machine settings and its virtual hard drives, and then zip them up and send them offsite. This way if disaster does strike, I have an offsite backup that I can bring online pretty quickly. This also means that I can prototype a virtual server for a given research project and, when my work is complete, hand off the exported VM to the host institution’s IT department to spin up under their virtualized infrastructure.
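The zip-and-ship step is easy to script. Here's a minimal sketch; the `vm_export` folder and its contents are stand-ins, since the actual export happens first in Hyper-V Manager (or via the Export-VM cmdlet on newer Windows releases):

```python
# Sketch: archive an exported VM folder for offsite transfer.
# "vm_export" is a hypothetical path; in practice it would hold the
# exported VM configuration and its virtual hard drive files.
import shutil
from pathlib import Path

export_dir = Path("vm_export")
export_dir.mkdir(exist_ok=True)
(export_dir / "config.xml").write_text("<vm/>")      # stand-ins for the
(export_dir / "disk0.vhd").write_bytes(b"\0" * 16)   # real export files

# One zip file, ready to copy to the offsite location of your choice.
archive = shutil.make_archive("vm_export_backup", "zip", export_dir)
print(archive)
```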

Legacy Projects

I list this as a feature, but others may see this as a curse. There are always those pesky “Projects That Won’t Die”! You or somebody else set them up years ago and they are still deemed valuable and worthy of continuation. Either that or nobody wants to make the call to kill the old server – it could be part of a complex mechanism that’s computing the answer to life, the universe and everything. Shutting it down could cause unknown repercussions in the space-time continuum. The problem is that many hardware warranties only run about 3 years or so. With Moore’s Law in place, even if the physical servers themselves won’t die – they’re probably running at a crawl compared to all of their more recent counterparts. More importantly, the funding for those projects ran out YEARS ago and there just isn’t any money available to purchase new hardware or even parts to keep it going. My experience has been that those old projects, invaluable as they are, require very little CPU power or memory. Moving them over to a virtual server environment will allow you to recycle the old hardware, save power, and help reduce the support time that was needed for “old faithful”.

An easy (and free) way to wiggle away from the physical and into the virtual is via the SysInternals Disk2VHD program. Run it on the old box and in most cases it will crank out files and virtual hard disks (VHDs) that you can mount in your virtual server infrastructure relatively painlessly. I’m about to do this on my last two legacy boxes – wish me luck!


Most of my experience has been with Microsoft’s Hyper-V virtualization technology. A good starter list of virtualization solutions to consider is:

Hopefully my rambling hasn’t put you to sleep. This technology has huge potential to help save time and resources, which is why I got started with it originally. Take some time, research the offerings and make something cool with it!

© 2024 Ocean Bytes Blog
