Thursday, April 13, 2017

When to Bring Software Development In-house

A friend taking an HR course contacted me this evening to ask for my input on her homework assignment. The scenario was that you, the student, work in the HR department at a 75-person company that delivers meals to people. The first version of the mobile app, used for scheduling deliveries, has been outsourced, and the CEO believes it's taking too much time and money to develop. The CEO is considering hiring software and QA engineers to bring development in-house and wants your input.

This is a good scenario for the real world, and I was happy to share my thoughts. My first question was whether the company considers itself a technology company. Corporations like Apple and Amazon are clearly high-tech companies, so it's a no-brainer for them to develop their own software. On the other end of the spectrum are companies that use custom IT systems but are not tech companies. For example, Wyndham, where I worked about five years ago, outsourced development of their e-commerce websites and backend reservation systems. Asking a company to determine how it self-identifies is a good first step.

Another thing to consider is how often the software will be updated. Modern, high-tech companies release new versions of their software weekly or monthly. Facebook is a perfect example of a 21st-century company that treats development as an ongoing process, instead of an event, by releasing new software three times each day.

These considerations are simply a starting point for the discussion. If the decision is made to bring development in-house, then there are questions about conducting interviews, how to dress, and hearing about candidates' real-world experiences.

Tuesday, April 4, 2017

DNS Hijacking?

I have a DNS hijacking theory.

Route 53 is Amazon's elegant DNS web service. DNS is the part of the Internet that converts domain names into IP addresses; it is how humans reach computers on the Internet. While DNS is robust, resilient, and redundant, it is also the Internet's single point of failure.

So, here's my theory. Websites, like, use Route 53:

dig ns

returns: 172800 IN NS 172800 IN NS 172800 IN NS 172800 IN NS

This means the first time you visit the site, your web browser (or your ISP's resolver) will ask one of the Internet's root servers about the domain (i.e., where the domain name is registered). The root servers will point your browser toward Moniker (a domain name registrar, similar to the well-known GoDaddy). The next step is that your browser asks Moniker where the domain's DNS servers are located. These are referred to as the DNS name servers, or NS for short. As seen above, the response will point your browser to Route 53, which answers with four different servers for redundancy. The final step is that your browser queries any one of these four servers for the actual IP address of the site. All of these steps happen in the blink of an eye.
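The chain above can be sketched as a toy lookup, with each party in the process reduced to a dictionary. This is not a real DNS client, and every name and address below (example.com, the awsdns server names, the TEST-NET IP) is a made-up placeholder, not a value from the actual dig output:

```python
# Toy model of the four-step resolution chain described above.
# Each dict plays the role of one party in the lookup.

ROOT = {"example.com": "moniker"}  # step 1: a root server points at the registrar

REGISTRARS = {                     # step 2: the registrar knows the NS names
    "moniker": {"example.com": ["ns-1.awsdns-01.com", "ns-2.awsdns-02.net"]},
}

AUTHORITATIVE = {                  # steps 3-4: Route 53's servers hold the records
    "ns-1.awsdns-01.com": {"example.com": ""},
    "ns-2.awsdns-02.net": {"example.com": ""},
}

def resolve(domain):
    registrar = ROOT[domain]                 # ask a root server about the domain
    ns_list = REGISTRARS[registrar][domain]  # ask the registrar for the NS names
    ns = ns_list[0]                          # any of the redundant servers will do
    return AUTHORITATIVE[ns][domain]         # ask it for the IP address

print(resolve("example.com"))  # prints
```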

Now here's the hijacking part. What if I go to my own Route 53 account, create a hosted zone for the same domain, and start adding records? When I did this, Route 53 assigned four new NS servers to me.

There should be no hijacking problem, since the four NS servers Route 53 assigned to me are different from the ones serving the real domain. In other words, I cannot hijack the domain's Internet traffic in this case. But what if Route 53 had assigned me an NS server that was the same as one of the real domain's NS servers? Then, I'm speculating, I could redirect at least a small portion of the domain's traffic wherever I wanted.
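That reasoning can be made concrete with a toy model: a resolver only ever queries the NS names listed in the parent delegation, so records in a second hosted zone, parked on a disjoint set of servers, are simply never consulted. All names and addresses below are invented:

```python
# Toy model: two hosted zones for the same domain on disjoint sets of
# name servers. Only the zone whose servers appear in the parent
# delegation is ever queried.

legit_zone = {"servers": {"ns-100.awsdns-10.com", "ns-101.awsdns-11.net"},
              "a_record": ""}
attacker_zone = {"servers": {"ns-200.awsdns-20.org", "ns-201.awsdns-21.co.uk"},
                 "a_record": ""}

# The registrar's delegation for the domain lists only the legit servers.
delegation = {"ns-100.awsdns-10.com", "ns-101.awsdns-11.net"}

def lookup(delegation, zones):
    # A resolver follows the delegation: it only ever reaches servers in
    # that set, so it sees whichever zone those servers host.
    for zone in zones:
        if zone["servers"] & delegation:
            return zone["a_record"]
    return None

print(lookup(delegation, [legit_zone, attacker_zone]))  # prints
```

Conversely, if the attacker's zone ever landed on a server name that appears in the delegation (the overlap speculated about above), the same lookup could just as easily return the attacker's address.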

Perhaps this isn't an issue because Route 53 ensures that it never duplicates NS server names across customers. That would be an expensive proposition, but certainly doable. From there, if my theory holds true, then what about simpler DNS hosts, such as GoDaddy, whose DNS server names appear to be limited to a small fixed set distinguished only by a two-digit number? This means that many different domain names are using the same DNS server names. Would that make it possible to hijack some traffic from websites sharing the same DNS server? I'm sure that DNS implementations are robust enough that this isn't an issue; otherwise it would have occurred by now. But, with my understanding of the DNS RFC, I don't know how this hijacking issue has been avoided.

So, how has this DNS hijacking scenario been prevented? I'd love to know.