‘Broken Pipe’ and ‘Timeout’ Warning On Sophos UTM

Recently, I enabled HTTPS Decrypt and Scan for Web Filtering profiles on a Sophos UTM 9.3 at a client site. This was done in order to see all of the domains and addresses being accessed via HTTPS (which these days is almost everything). Without enabling this feature, sites like https://www.facebook.com and https://mail.google.com do not show up in Web Usage reports. Turning this on provided immediate benefit. However, it also caused a few issues. Namely, some pages started displaying what looks like the standard Content Block message, listing ‘Broken Pipe’ or ‘Timeout’ as the error.

What This Means

When these errors are displayed, it basically means the client sent a request to the server and the server terminated the communication without a response. This started happening for the client because all of the HTTPS traffic was now being inspected, which means it was also getting scanned for viruses. Some of the end servers didn’t appreciate that very much.

The Solution

Since the sites that were causing issues were mostly financial and banking sites, these domains are trustworthy. The solution is to add them to the Transparent mode skiplist. But it’s not as simple as adding www.megabank.com to the list. Most of these banks utilize no fewer than 10 different hosts when you process banking transactions, and each one of them has to be added individually. Unfortunately, the skiplist does not allow wildcards. Another “gotcha” is the fact that some of the hosts can point to multiple IP addresses. One of the banks I had to add utilized 19 different hosts (all under the same main domain), and 10 of those hosts resolved to more than 2 IP addresses. Here are the steps I went through to set this up.

The best way to figure out which hosts you need to add is to review the Sites report under Logging & Reporting > Web Protection. Then click on the site. This will bring up the list of Domains for the site. You will need to make an entry for each of these, so make a note of all of them.

I’m a big fan of groups, so the first thing I do when I set up anything like this is create a group for the particular service. For bypassing the proxy, I call the highest-level group “No Proxy”.

1. Navigate to Definitions & Users > Network Definitions > New Network Definition
2. In the Name field type “No Proxy”
3. Set the Type to “Network Group”

Now, I’m going to add another group for each of the entities (banks) I want to grant this permission to.

4. Click the + in the top right of the Members box. Another Add Network Definition dialog will appear.
5. Enter the entity’s name in the Name field. For this example, I’ll enter “Mega Bank”
6. Set the Type to “Network Group”
7. Save. You will now be back to the “No Proxy” definition.
8. Double-click the “Mega Bank” group and you will be presented with a dialog to make changes to it. We are now going to add the hosts.
9. Click the + in the top right of the Members box. Another Add Network Definition dialog will appear.
10. Enter the hostname in the Name field.
11. Set the Type to “DNS Group”. This is critical! Most major sites use some kind of load balancing/fail over that allows for a hostname to point to multiple IP addresses. If you set this to “DNS Host”, and you get redirected to a different IP than the DNS Host record has cached, the errors will come back. Setting the option to “DNS Group” makes the entry store all of the possible IPs the host can resolve to.
12. Enter the hostname again in the hostname field.

Repeat steps 9-12 for all of the hosts in your list.

13. Save the “Mega Bank” group.
14. Save the “No Proxy” group.

Now we need to add the No Proxy group to the skiplist:

15. Navigate to Web Protection > Filter Options > Misc.
16. Add the group to the destination hosts/nets list under the Transparent mode skiplist.
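Why “DNS Group” matters in step 11 becomes obvious once you see how many addresses these hosts resolve to. Here is a quick sketch in Python for checking the hosts on your list before you enter them (the megabank hostnames are placeholders; substitute the domains you collected from the Sites report):

```python
import socket

def resolve_all(hostname):
    """Return the sorted set of IPv4 addresses a hostname currently resolves to."""
    infos = socket.getaddrinfo(hostname, 443, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

# Placeholder hostnames -- use the ones from your own Sites report.
for host in ["www.megabank.com", "online.megabank.com"]:
    try:
        print(host, resolve_all(host))
    except socket.gaierror:
        print(host, "did not resolve")
```

Any host that prints more than one address is exactly the case where a “DNS Host” entry would eventually break and bring the errors back.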

The downside to doing this is that HTTP/S communication to these hosts will no longer be scanned for policy violations or viruses. So be sure you know what you are risking by allowing connections to these hosts.


IMAP Checker Is Born

While at work a few weeks ago, I came up against an interesting problem. I needed a way for a client’s monitoring solution to monitor email flow into the system. I wanted to go past the typical “SMTP is running, Exchange services are running” kind of checks. Those might not give me the whole picture. For instance, checking that SMTP is accepting connections from the inside doesn’t tell me if mail is making it to the SPAM filter and the SPAM filter is delivering messages correctly. There are also plenty of times where being able to connect to a service doesn’t mean it is really accepting – and then delivering – mail. Also, the SPAM filter acts as an SMTP proxy, so it will accept the message without external servers ever talking to the internal Exchange server. Plus, the SPAM filter accepts the message and queues it until the internal server can be reached. So, if you are just monitoring the delivery of the message, it may look like everything is OK from the outside. No – I needed a way to actually verify the message made it to an inbox. If I got that, it would tell me that the domain registrar is OK, the DNS is OK, the MX is OK, the internet connection is OK, the SPAM filter is functioning, the mail server is up, the services are running, and the Exchange store is mounted.

Scheduling a message to be sent was no problem. A simple PHP script and cron job later, and I’m firing off messages at regular intervals. The tricky part was going to be knowing the message was really delivered to the address I sent it to. I researched for a while if there were any ways to natively talk to the mailbox through VBScript or PowerShell, but there was nothing. Plenty of ways to do it on Exchange 2007 and newer. But, alas, this was not the case in this environment. The client is running Exchange 2003, so my options were severely limited.
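The send side really is trivial in almost any language. Here is a sketch of the same idea in Python instead of PHP (the host and address values are placeholders for your own environment); the timestamp in the subject is what lets you match up each probe on the receiving end later:

```python
import smtplib
from datetime import datetime, timezone
from email.message import EmailMessage

def probe_subject(now=None):
    """Build a timestamped subject line so each probe message can be
    uniquely identified in the destination mailbox later."""
    now = now or datetime.now(timezone.utc)
    return now.strftime("mailflow-probe %Y-%m-%d %H:%M:%S")

def send_probe(smtp_host, sender, recipient):
    """Send a single probe message through the given SMTP host.
    smtp_host, sender, and recipient are placeholders."""
    msg = EmailMessage()
    msg["Subject"] = probe_subject()
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content("Automated mail-flow probe. Safe to ignore.")
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
```

Drop a call to `send_probe()` into cron (or Task Scheduler) and you have the same regular-interval firing the PHP script gave me.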

I was pretty surprised that I couldn’t find anyone else trying to do something like this. With Exchange 2003 being an 11-year-old mail system, surely somebody had done this before. Oddly enough, I couldn’t find anyone. However, during my research I stumbled on a project called Mailsystem.Net. It is a collection of libraries written for mail system communication, meant to be used with web applications or Windows applications. So, seeing as how I couldn’t find any scriptable way to check an inbox, it looked like my choice was clear. I had to build an application to do it.

My job is to engineer and administer information systems. For the most part, I design the infrastructure, I set up the infrastructure, and I support the infrastructure and its users. I can write VBScripts all day long, and I’m very comfortable in VB. But I would never call myself a professional programmer, nor would I ever apply for a job as a developer. With all that said, I’m not afraid to get my hands dirty with code in order to solve a problem I am facing. I actually really enjoy a bit of developing every once in a while. It may not be pretty – and there is most definitely going to be a better way to do it – but if I have to turn to writing an application to get what I’m after, then so be it.

The IMAP Checker utility is pretty simple. It only performs one function, but it does it pretty well: check if a message was received and report back the date and time of the message. It’s a pretty specific task. It’s basically a “yep, I got it” and that’s it. But for monitoring a mailbox, or testing a mailbox, it works perfectly.
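The real utility is a .NET app built on Mailsystem.Net, but the check itself is simple enough to sketch with Python’s standard imaplib, for anyone curious what “yep, I got it” looks like on the wire (the host, credentials, and subject token below are placeholders):

```python
import email
import imaplib
from email.utils import parsedate_to_datetime

def message_date(raw_headers):
    """Parse the Date header out of raw RFC 822 header bytes."""
    msg = email.message_from_bytes(raw_headers)
    return parsedate_to_datetime(msg["Date"])

def check_inbox(imap_host, user, password, subject_token):
    """Return the Date of the newest INBOX message whose Subject contains
    subject_token, or None if no match was found. Host and credentials
    are placeholders for your own environment."""
    with imaplib.IMAP4_SSL(imap_host) as conn:
        conn.login(user, password)
        conn.select("INBOX", readonly=True)
        _, data = conn.search(None, "SUBJECT", f'"{subject_token}"')
        ids = data[0].split()
        if not ids:
            return None
        _, msg_data = conn.fetch(ids[-1], "(RFC822.HEADER)")
        return message_date(msg_data[0][1])
```

If `check_inbox()` returns a date, the whole delivery chain worked; if it returns None (or throws on connect), the monitoring system raises an alert.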

As I was developing this app, it hit me that surely someone else could find this useful. I saw other people on the monitoring forums asking the same questions and not getting any really good answers. Always looking to pick up a new skill set, I decided it was time I released one of my projects into the wild for others to use and see what happens. So the IMAP Checker project was born.

Ironically, a little later, as I was setting up the checking inside of the monitoring system, I think I found a way to do it from within the solution itself. But then I thought, I’m just going to keep going with this. This checks an IMAP mailbox, so it really doesn’t matter what mail system you use it with or what monitoring system you incorporate it into. You could use it against any version of Exchange, any Linux-based mail system, Gmail, or any other provider that offers an IMAP connection.

Now for the good stuff! Being a good interwebs citizen, I decided to release the utility as open source. The project itself can be found on GitHub (https://github.com/DavidWGilmore/imap-checker). You can download the source code or the current release from there. It’s a pretty self-contained application. All you have to do is unzip the release file, adjust some settings in the config file, and you’re off! It runs on any Windows platform that supports .Net 4.5. I am contemplating moving it to an older Framework version so people running Server 2003 and Windows XP can use it, but only if there is enough interest.

If you use it, I encourage you to drop me some feedback. This is my first publicly released application, and my first source code release for public viewing. If you think I need to organize the source code more, let me know. Is my project organization confusing? Tell me about it! I really care about the quality of my work, but if no one uses it and tells me about it, I won’t be able to fix anything to make it better.

Hope you get a chance to check it out. Happy email monitoring!

-D

How I Made My Own Private Google For $10: The Server

In the first two posts, I gave you the why and the how of our plan to make our own private Google-like ecosystem. Now, let’s start talking about the actual hosting of the server. We’re still not ready to install anything yet. But almost!

The Operating System

The operating system, for me, is Debian. Once we have everything set up, we won’t have to be installing that many new items. Debian is rock solid, as updates are tested over and over and over before they are made available in the package repositories. If you need to be on the latest and greatest, Debian is not for you. But it has everything you will need for what we are going to do, and its track record of security and stability is second to none. So these articles will revolve around a Debian install. However, there are hundreds of viable Linux choices you can apply the same ideas to. Another great choice is CentOS.

For many people, Windows is where they live. And that’s fine. A lot of the packages we will be installing also have Windows counterparts. But oftentimes they are behind in development versus the Linux packages, so all of the features may not be available. Plus, we are assuming you want to spend as little as possible to get this setup done. Windows Server licenses cost upwards of $500. Linux is free.

The Server: On-Site Or Hosted

The next question we have to answer is where do we want the server to live? There are two basic options to choose from. The first is hosting it on something we have complete control over – most likely at our home, or at work, or at a data center if we are so lucky. The other choice is “in the cloud”. This means we pay someone to run a virtual Linux server for us in their data center. Examples of this are Amazon, Rackspace, Linode, etc. Let’s look at the pros and cons of each.

On-Site

If you have to have control of everything, this is the way to go for you. The easiest thing to do would be to load Oracle VirtualBox onto a workstation and configure a VM to run the server. The server we will be working with takes very few resources, so it may not be a big deal for you. Of course, if you have spare hardware lying around, you can use that, too. Doing this means you can control how much hardware your server gets, you can encrypt and secure it to Ultra Paranoid Mode, and you don’t have to worry about hosting providers being able to get to your stuff.

As for the cons, the main issue may in fact be resources. Even though the server takes very little RAM or CPU, it still may be too much for you to run it on your main workstation and keep it on all the time. If you want to play a game, count on having to shut the VM down. You also need to consider the bandwidth requirement. To host your own server, what you really need is upload bandwidth. Hosting a simple website, or hosting a 1-5 user mail system, doesn’t take much. But if you want to start transferring files and media, this may become a problem. Also, your ISP may completely block certain incoming or outgoing ports, like inbound web or inbound/outbound mail, so some services you host might not work.

Hosted

Running a server in a hosted environment (called a Virtual Private Server or VPS for short) can get you a ton of benefits, too. Chiefly, your server is not sharing resources with possibly your main workstation/laptop at home. Plus, it’s a lot faster to get started, as the OS comes setup with the basics from the moment it is provisioned. A lot of providers (like Amazon) even have turnkey solutions that you add to your service and all the packages and configuration are done as soon as the server is powered up.

You have to pay for a VPS (which is where my $10 for this project comes into play). Some services charge you based on your usage (CPU and bandwidth per hour/month/whatever). Some charge you a flat rate every month for a set configuration. What you are paying for is a much better infrastructure, in terms of reliability and throughput, than you can probably dedicate to the server if you host it at your home. Their underlying servers are usually better, and their internet connections are usually better in terms of being closer to the backbone of the internet, having failover, having redundancy in place, etc.

Mine cost me $10 for a year. A year! Granted, it’s a very small setup (256MB RAM, single-core processor, 50GB storage, and I forget how much bandwidth right at the moment), but it works perfectly for me so far. I have all of the services we are going to cover running on it, haven’t done any memory optimization yet, and I’m sitting at an average of 192MB RAM used. CPU barely blips. I’m using less than 10GB of storage right now. So this is a perfect place for me to play. Granted, if I needed more storage or memory, I’m looking at paying a monthly fee. But this setup fits me perfectly right now. If you want to find a good, cheap VPS, I recommend a site called LowEndBox.com. You’ll find lots of VPS providers listing their services there, and the site puts out a report every quarter covering the best-reviewed VPS hosts based on user surveys. Plus, the users in the forums have no problem telling you who is and who is not a good host.

Running your server as a VPS is not without issues, however. Mostly, what you give up is control. Someone else is still hosting your data. It’s a little different than the Google issue, though, because you have a lot more control over the server the data is on. You can encrypt folders and files. But if the VPS provider has root access to your server (which most do), then the encryption of live data is useless. Plus, if someone else on the physical server you are hosted on is up to no good, and the MIBs come knocking, they don’t care who else is on the box as long as they get their guy. You would end up as collateral damage.

You also do not have complete control of the server. For instance, you can really only install the OSes they offer – typically Debian, Ubuntu, CentOS, and Gentoo. So, if you want to install something like a Zentyal server (a pre-configured, drop-in Small Business Server), you are out of luck. Also, you may be restricted in the changes you can make to the kernel. This mostly depends on the underlying virtualization platform. If your provider is using OpenVZ, you are sharing the kernel with other servers, so what you can change may be limited.

Another thing I noticed once I got my box fired up was all of the attacks that were coming against it already. I’ve setup tons of servers at my home, and hosted mail and websites there, and got almost no malicious attacks. But, within 6 hours of getting my mail server up and running on my VPS, I saw people trying to relay through it already.

I did a lot of research before I chose my VPS host and have heard a lot of horror stories – mostly in the customer service arena. Things like “I paid for my server 3 days ago and it still isn’t provisioned.” Or “I can’t get a hold of anyone at tech support”. A lot of the smaller players seem to come and go, so do your research if you are going low end.

Hosting a server on your own equipment, or hosting it with a VPS, are both completely viable options. What it boils down to is how paranoid you are, how many resources you have to give to a VM, and how much money you want to spend.

Good luck!

-D

How I Made My Own Private Google For $10: The Plan

In part 1 of this series, I ranted a bit on why I want to build my own private Google-like ecosystem. Whenever I get one of these ideas, it’s really easy to just take off, start installing software and getting my hands dirty. But, experience has taught me that it will pay off in the future to sit back and plan it out before I even hit download on the first package. So let’s start by identifying what it is we love to use with Google.

Disclaimer: I realize that there is no way I can personally live up to what Google has done. My servers will never be as fast, or have as much storage, or be as “disaster friendly” as theirs. Nor will I ever truly be able to have as many services as them. But I’m not looking to host stuff for 100 million users. I just want a place to put the stuff I use, as well as a few of my friends and family members. The load we require is pretty miniscule and completely doable.

OK, so what does Google do for me? Well, it has email, of course. And no matter where I am, or what I connect with, my inbox is there, with my unread marks, my folders, my sent history – it’s all there and the same no matter how I access it. Same for my contacts and calendar. Delete something on my phone and it’s gone in my webmail and on my tablet.

Another service I use is Google Drive, for being able to work on documents and spreadsheets no matter where I am. I don’t really share my files with anyone else, but I know that I could if I wanted to. Personally, I also don’t do a TON of documents or spreadsheets. But there are a few I access and update all the time, so I need that.

Securely accessing information is something we will want to do. One great way to do this is across a VPN. We also want our email and web traffic to be encrypted. While you can do this with self-signed certificates for each service, there is a better, more well designed way to do it, and that’s with your own internal Certificate Authority. When we get to that, I’ll discuss the pros and cons of using your own CA as opposed to buying certificates from places like Verisign or GoDaddy.

Another thing a VPN gets us is a secure internet connection when we are in public areas. If we connect to a VPN when we are in Starbucks, and then route all of our traffic (email, web traffic, Facebook, etc.) across that, then the Bad Guys hooked up to the same WiFi can’t get to our stuff.

Finally, we will want to tie all of this together with a single unifying ID. When you connect to any Google service, it’s all tied to your Gmail account. We want all of the things we do to be connected in this way, too. All of the products we will load have the ability to use their own, separate user accounts, but they can also connect to a user directory. That will make administering all of these services way easier.

There are 3 other services we may use with Google that are a little more complicated to pull off. Not because of the setup required, but because of the resources required – primarily bandwidth, RAM, and processor. The first of these is chat (IM and voice). The second is media streaming. I have a massive music library that Google will host for me and allow me to listen to on any of my connected devices. Obviously, if I’m on the same LAN as the computer that holds my music, there are tons of ways to do this. But what if I’m away from the house? The biggest hurdle here may be bandwidth, but we will try to attack this one too. And the final thing is social media. It is possible to host our own social networks, and therefore control who we share our information with. But, for some people, we may not know who all we want to connect with, and we rely on them finding us in order to get connected. This presents a bit of a hurdle. So, this one may not be worthwhile to explore, but I might try it just “because I can”.
To recap, our plan is:
  1. A secure mail platform that will allow us to keep everything, including our contacts and calendars, in sync across multiple platforms
  2. A secure way for me to access and edit my files from anywhere
  3. A VPN to keep our activity private when we are in public areas.
  4. A chat server
  5. Media streaming

Over time, I may expand the scope of what we are after, but for now, I think that will get us focused on the correct course of action. In the next post, we’ll start with getting our base server setup. Hope you all are ready to get our hands dirty now!

Shine On

-D

How I Made My Own Private Google for $10 Part 1: The Why

Let me start off by giving Google credit where it’s due. Their ecosystem is awesome. The way they have all their different services and features tied together makes life in the Google cloud pretty easy and seamless. As an Android user, whenever I get a new phone, I put in my Gmail account and get back all my contacts, calendars, WiFi connections, mail, stored documents, previously installed apps, and much more. Anyone who has ever had to reinstall their operating system, or who has bought a new computer, knows how much work it is to get all your programs reinstalled, all your documents back, your email set up, your WiFi connections configured, etc. Google does all that for us.

So Why Reinvent The Wheel?

Once you look back at that impressive list of data you access from your Gmail account, you may suddenly realize something. Damn! Google knows a LOT about me! Now, I don’t sit here and think that someone at Google is reading the love letter I wrote to my wife and stored in my Google Drive. And I don’t think that someone at Google is reading my email about my latest Amazon purchases. And I don’t think someone at Google is getting my WiFi passwords so they can break into my network. But Google does build a profile on you. That’s a fact. They even tell you this. They tell you they scan your email so they can deliver more targeted advertising to you. They will scan your messages, see your receipt from 1-800-Flowers for the anniversary gift for your wife, and now all of a sudden you will start seeing ads around this time next year for florists in your area. Because they keep all the WiFi networks you have connected to stored in your account, they know where you have been. And, they can match that up with other people who have connected to the same networks and start to pull together how people are connected to one another. One of the scariest things is the Google Glass product. If you put one of those on and walk around they will literally be looking through your eyes and recording everything you are seeing in one way or another.

This is from a CNBC article discussing whether Google can still use the slogan “Don’t Be Evil”:

Collecting information about its users is key to Google’s business model and in 2010, Eric Schmidt, the company’s executive chairman, gave a pretty clear picture of how much the company actually knows about its users.

“With your permission, you give us more information about you, about your friends, and we can improve the quality of our searches. We don’t need you to type at all. We know where you are. We know where you’ve been. We can more or less know what you’re thinking about,” he said in an interview at the Washington Ideas Forum.

Schmidt also said in the same interview that the company’s policy is “to get right up to the creepy line and not cross it.”

All that being said, Google isn’t the scariest entity. It’s the world governments. Governments will go to the various technology companies like Google, Microsoft, Facebook, Verizon, AT&T, etc. and say they need information on a certain person. These companies deny the requests more often than they comply. But sometimes, through threat of legal action and hefty fines, they have no choice and have to comply. And they can’t tell you about it. They will come out a couple times a year with what are called Transparency Reports, where they tell us that the government asked for information x amount of times and they had to comply y amount of times. The problem is these reports usually come out 4-6 months after the fact, and by then it’s already too late. And they never release who the requests were for. So you never know.

Around this time last year, news broke from part of the (in)famous Edward Snowden leaks that the NSA was tapping connections between Yahoo’s and Google’s datacenters and was able to siphon off immeasurable amounts of data to their data warehouses for review later. This was possible because Google and Yahoo, at the time, were not encrypting communications between their servers internally. Google said they had no idea this was going on and were stunned to find out.

But I’m not a criminal. I’m not a terrorist. Why do I care?

More than likely, that’s completely accurate. But in order to defend our country, the government relies on intelligence gathering practices. It brings “guilty by association” to a whole new level. Let’s look at a couple of ways innocent people can suddenly show up on the radar of the intelligence gathering community.

Let’s say Bob is a known terrorist. He is obviously being watched by the intelligence community. And let’s say Bob has a brother named Krusty. Being a family member, he is also being monitored. Now, let’s say Krusty goes to a local tavern and strikes up a conversation with you. You’ve never met the guy before, but he seems pretty cool (probably because you’ve had too many Duffs), so you and he become friends on Facebook. The government sees that you are connected to a party of interest, so now you are on their radar, and every email, every text message, every swipe of your credit card is being reviewed for “useful information”. You happen to buy a couple bags of fertilizer at Home Depot using your credit card. Krusty happens to buy a couple canisters of gasoline, because being a clown isn’t enough to pay the bills and he needs fuel for his lawn mowing side job. All of a sudden, you are arrested, because the two of you were obviously planning on helping Bob carry out the bombing of city hall he was threatening the decent citizens of Springfield with.

I know it sounds a little far fetched, but it really isn’t far from the truth. As another example, let’s say you and Bob go to the same bar at the same time every day. You don’t know each other, and you never talk. But because Google knows what WiFi networks you and Bob connect to, and the NSA has siphoned this from Google without their permission, all of a sudden you are an associate of Bob’s and a person of interest, because the two of you are together at the same place at the same time regularly.

OK. So maybe that does sound a bit too conspiracy theorist, “tin foil hat” for you. And that’s fine. But look at it this way: why do they need information on you in the first place? We have a right to privacy. When you send out your check to pay your utility bill, you seal it in an opaque envelope so no one can see the contents. You don’t tape it to a postcard for every mail carrier to see. Hell, you seal up the birthday card you send to your Aunt. You care about privacy. Why is this any different? And the blame doesn’t rest entirely on the big tech companies. We as a culture willingly give them this information. We tell them everywhere we go, everything we eat, how we feel about certain products or current topics. We are helping them.

Oh come on! Tell The Truth. Is paranoia the only reason?

Of course not 🙂 Ask anyone involved with technology, or lots of other creative processes, “why are you doing this” and the answer is almost always “because I can” or “because I wanted to try”. And that reason is just as big for me as the rant I just gave above. As a systems engineer, I look at the various server and software packages much the same way a painter sees his palette of colors. Individually, they don’t do much. But combine them together in interesting ways and now you’ve got something! So why not take charge of my own data? I won’t lie – this is going to be a fairly complex setup. Maybe overkill for some things. But, as a wise little green man once said, “Do or do not. There is no try”. In other words, anything worth doing is worth doing right. I’ve started building this out already, and it’s been a lot of work. A lot of trial and error. Probably more time wasted than not because of the mistakes. And I don’t expect people like my parents, or my boss, to go to these kinds of measures. But that’s what IT friends are for :). And, as is the way of open source, eventually some of these concepts will start becoming turnkey solutions and get combined into one “private Google” product or series of scripts you can use to set it up yourself.

OK, enough of the ranting and the reasoning. In part 2 I will lay out The Plan so we can see what all we will be getting ourselves into.

Shine On,

-D

Use RVTools To Find Your VMWare Snapshots

This weekend, we had some issues with our Veeam Backup and Replication jobs. While on the phone with Veeam support I discovered that the VM in question was running off of a couple Consolidate Helper snapshots that had not been consolidated back into the VM. This, of course, is a terrible practice, and the longer you wait to delete and consolidate them, the bigger they get and the longer they take to delete. Luckily I was able to delete all of my snapshots and be back on the flat file without any issues.

Now I was faced with a sudden realization. How many of my other VMs were running this way? We are a small place with a relatively small virtual footprint compared to most. But it seemed rather inefficient to me to have to go look at the Snapshot Manager on each VM one at a time. There had to be a better way.

I did some research and found out there was no built in function within the vCenter client to show you all of the snapshots across your infrastructure. I found lots of scripts I could use, and I love me a good script, but there had to be a better way to do this.

Enter RVTools. It’s a free app that runs on Windows and uses the VI SDK to talk to your VMWare infrastructure. The amount of info displayed by this little tool is astounding! It has 23 different tabs full of data on your VMWare setup. The particular tab I was interested in was the vSnapshot tab (see below). I sorted the snapshots by name and found all the Consolidate Helper snapshots that got left behind by Veeam at some point. Now I can go back and fix these. Going through my whole cluster and manually checking the Snapshot Manager on each VM would have taken me 15-20 minutes. This takes me 2.

RVTools
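For anyone who still prefers the scripted route, the heart of those scripts is just a recursive walk of each VM’s snapshot tree. Here is a minimal sketch of that walk in Python; the `.name` and `.childSnapshotList` attribute names match what vSphere exposes (with pyVmomi you would pass `vm.snapshot.rootSnapshotList`), and the sample tree below is made up:

```python
from types import SimpleNamespace

def walk_snapshots(snapshot_list, path=""):
    """Flatten a nested snapshot tree into a list of full snapshot paths.
    Works on any objects exposing .name and .childSnapshotList, which is
    the shape pyVmomi gives back for a VM's snapshot tree."""
    rows = []
    for snap in snapshot_list:
        full_path = f"{path}/{snap.name}"
        rows.append(full_path)
        rows.extend(walk_snapshots(snap.childSnapshotList, full_path))
    return rows

# A stand-in tree mimicking what vSphere reports for one VM:
helper = SimpleNamespace(name="Consolidate Helper - 0", childSnapshotList=[])
veeam = SimpleNamespace(name="VEEAM BACKUP TEMPORARY SNAPSHOT",
                        childSnapshotList=[helper])
print(walk_snapshots([veeam]))
```

Run that over every VM in the cluster and grep the output for “Consolidate Helper” and you have the same report the vSnapshot tab gives you.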


Hello world!

Ah yes, the obligatory ‘Hello world!’ statement. It’s the first thing you learn to do in any new language – make the screen say ‘Hello world’. And, it just so happens to be the first/sample post when you setup many CMS platforms. So, what better way to begin this site than with a ‘Hello world!’ post.

A while ago, I started a blog called Sonic Jamming Signals. It focuses on one of my main passions in life – music. I started it up, and once it got rolling, I realized something. I need to blog about a lot more topics than just music! If you’ve ever met me in person or online, you know I never shut up and I have an opinion on everything. The biggest thing I realized is that I needed a place to write up the things I discover in my day-to-day activities as an IT professional. So, rather than change the existing blog I had, I decided to start a new one – which is what we have here. There will be a lot more than just IT, but IT will probably be the vast majority of the content. I won’t go into complete detail in this post on what all I’m into, but if you want to read more about me and what you can expect, you can do so here.

Another thing I wanted was a place to host online services for my family. I used to have a personalized domain name, but who else besides me would ever want an email address @davidwgilmore.net? My son is getting to that age where he will need an address, and a place to store his stuff. And I always wanted to have a domain name that was family oriented. So I came up with Gilmoreipedia.org. And yes, it is a play on Wikipedia. I intend this to be a storehouse of information and all things Gilmore, so it made sense in a snarky way to use that name.

One thing that will become apparent in my first series of posts is just how big my idea is for this domain name. This brand. This thing that is All Things Gilmore. Basically, I had a bunch of ideas and needed a playground to try them out, and this space is it.

Hope you enjoy reading the things I post. I’d love to hear your comments – positive or otherwise.

-D