
Thursday, December 8, 2022

Depth-first Recursion with AI in 103 Seconds

Once, when I was interviewing with the iTunes U team at Apple, I was asked to write a depth-first search, using recursion, in a language of my choice. I chose Java and proceeded to sketch out a tree on the whiteboard while keeping track of my stack on the side of the board.

It took about 10 or 15 minutes. Then the hiring manager and I walked through the code, and I was thrilled that I passed, especially since recursion is not my strongest area of coding.

Today, I used ChatGPT, which was released eight days ago. It came up with three different solutions in less than two minutes. This is fascinating.
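
For context, here's a minimal sketch of the kind of recursive depth-first search the question called for, written in Java with a hypothetical Node class (this is neither my whiteboard code nor one of ChatGPT's solutions):

import java.util.ArrayList;
import java.util.List;

// A hypothetical tree node, used only for illustration.
class Node
{
    int value;
    List<Node> children = new ArrayList<Node>();

    Node(int value)
    {
        this.value = value;
    }
}

public class DepthFirstSearch
{
    // Visit the current node first, then recurse into each subtree.
    public static boolean contains(Node node, int target)
    {
        if (node == null)
        {
            return false;
        }
        if (node.value == target)
        {
            return true;
        }
        for (Node child : node.children)
        {
            if (contains(child, target))
            {
                return true;
            }
        }
        return false;
    }
}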


Tuesday, February 1, 2022

Refactoring vs Porting vs Optimizing

I've heard people use these terms interchangeably. Since good definitions make for clear ideas, I wanted to define them explicitly.

Refactoring: Restructuring code (its factors) without changing its external behavior (functionality) in order to make it more readable, improve its design, reduce complexity, etc.

Porting: Changing code so it'll run in a different execution environment (language, operating system, CPU, etc.) than it was originally designed for. This makes the code more "portable."

Optimizing: Modifying code to be more efficient, without changing its functionality, so it runs faster, uses less memory, etc.

Frequently, code changes focusing on one of these areas will have other benefits. For example, when porting code, it might also be optimized and refactored. 

Monday, April 8, 2019

Timer Objects for Network Latency

The heart of the Timer class.
I left out a simple tip about timer objects from my "Tricks I Learned At Apple: Steve Jobs Load Testing" piece. Below is a complete, yet simple, Timer class I wrote shortly after leaving Apple, when I was working with SMS Hayes AT commands and RESTful APIs.


Exponential Notification

Timer objects do nothing more than measure the time it takes for a server's request/response loop to complete. Since this type of call is made over a network, it might finish very quickly (as expected) or, if the network is down or congested, it could take a long time. If it takes a long time, the system admins will want to know. A good notification method is not to send an e-mail or text message every minute or so – that ends up flooding people's inboxes. Instead, exponential notification is a much better idea. For example, notify the system administrators immediately, then wait one minute before the next notification, then two minutes, four minutes, eight minutes, etc. Finally, send a last notification once the issue's fixed.
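
Here's a rough sketch of that notification idea (not the original monitoring code); notifyAdmins() and issueStillOpen() are hypothetical placeholders for whatever alerting and health check you already have:

public class ExponentialNotifier
{
    public void watch() throws InterruptedException
    {
        long waitMinutes = 1;
        notifyAdmins("Request/response loop is too slow");
        while (issueStillOpen())
        {
            Thread.sleep(waitMinutes * 60L * 1000L);
            if (issueStillOpen())
            {
                notifyAdmins("Issue still open");
            }
            waitMinutes *= 2; // 1, 2, 4, 8... minutes between notifications
        }
        notifyAdmins("Issue resolved");
    }

    private void notifyAdmins(String message) { System.out.println(message); } // e-mail or SMS in practice

    private boolean issueStillOpen() { return false; } // hook this up to the Timer check in practice
}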

Initiating the timer is simple...

Timer timer = Timer.startNewTimer();
NSLog.debug.appendln("Start time = " + timer.startTime());
Response response = saleTransaction.submitTransaction();
timer.stop();
NSLog.debug.appendln("Stop time = " + timer.stopTime());


And, lastly, the complete Java timer class is anticlimactic.


package com.woextras;

import com.webobjects.foundation.NSTimestamp;

public class Timer
{
    private NSTimestamp _startTime = null;
    private NSTimestamp _stopTime = null;

    // Convenience factory: creates a Timer and starts it immediately.
    public static Timer startNewTimer()
    {
        Timer timer = new Timer();
        timer.start();
        return timer;
    }

    public void start()
    {
        _startTime = new NSTimestamp();
    }

    public void stop()
    {
        _stopTime = new NSTimestamp();
    }

    public NSTimestamp startTime()
    {
        return _startTime;
    }

    public NSTimestamp stopTime()
    {
        return _stopTime;
    }

    // Elapsed time in whole seconds. Returns -1 if the timer was never started;
    // if the timer hasn't been stopped yet, it measures up to "now."
    public Long elapsedTime()
    {
        long completionTime = -1;
        if (_startTime != null)
        {
            long startTime = _startTime.getTime();
            long stopTime;
            if (_stopTime != null)
            {
                stopTime = _stopTime.getTime();
            }
            else
            {
                stopTime = new NSTimestamp().getTime();
            }
            completionTime = (stopTime - startTime) / 1000L;
        }
        return completionTime;
    }
}

Saturday, December 29, 2018

Java and JavaScript Objects

Dave Winer posted a lesson-learned tip about JavaScript. Although Java and JavaScript are unrelated languages, they have many similarities.

var d1 = new Date ("March 12, 1994");
var d2 = new Date ("March 12, 1994");
alert (d1 == d2); // false

It seems that JavaScript, like Java, is actually comparing the two Date objects, d1 and d2, to see if they're the same object in memory, not whether they hold the same value. Since these two variables don't reference the same object, the alert line of code returns false.

Although, at first blush, this seems unintuitive, it actually allows greater flexibility when making comparisons. If you don't want to compare the two objects, but rather their values, then you can simply send each Date object the getTime() message and compare the results, which returns true.

var d1 = new Date ("March 12, 1994");
var d2 = new Date ("March 12, 1994");
alert (d1.getTime() == d2.getTime()); // true

And, finally, to prove my theory to myself...
var d1 = new Date ("March 12, 1994");
var d2 = d1;
alert (d1 == d2);  // true

Thursday, April 26, 2018

Apple's Language Holy War

In the late 1990s, we used to joke about language holy wars at Apple. Apple had purchased NeXT, in December 1996, for WebObjects and the NeXTSTEP operating system (which became Mac OS X and was recently rebranded as macOS). Since NeXTSTEP's release in the late 1980s, the OS had been built on Objective-C (an object-oriented [OO] superset of ANSI C). In the mid-1990s, Java came along from Sun Microsystems and quickly became a mainstream OO language, leading to a holy war between Objective-C and Java.

WebObjects was originally written in Objective-C, but, by WebObjects version 3.5, in 1997, it was fully bridged with Java using the cheekily named JOBS (Java to Objective-C Bridging Specification). A WebObjects developer could write code in Java and most every Java object had a corresponding Objective-C object wrapped and running in the background.


Java vs. Objective-C

Around WWDC 2000 or 2001, Apple settled the holy war by stating that Objective-C would be used on the client (Cocoa desktop development) and Java would be used on the server (WebObjects server development). But we'd still argue about the pros and cons of the two languages.

The strength of Java was that it was a strongly typed language. The strength of Objective-C was that it was a weakly typed language. So, the pros and cons were subjective. It really depended on your needs. Objective-C would let a developer "touch the metal" meaning a developer could write code to interact with a computer's low level memory. This is very powerful, but it requires a lot of responsibility on the software developer's part since they'd have to manually manage their program's memory usage by using pointer arithmetic. Pointer arithmetic allows a developer to directly touch values in the memory of a computer. If the developer makes a miscalculation, such as terminating a string incorrectly, it could cause the program to crash.

The selling point of Java, a language otherwise very similar to Objective-C, was that it didn't use memory pointers. Instead, Java code ran inside a virtual machine that acted like a sandbox between the executable code and the operating system. Since Java couldn't directly touch computer memory, it used references instead of pointers. The big joke in Java was that, if you tried to call a method on an instance variable that was null, you'd throw a NullPointerException – a poor choice of name since Java didn't have pointers. That exception class should have been named NullReferenceException. An excellent solution for avoiding these bugs, called optionals (option types), was introduced in Apple's Swift programming language three years ago. But I digress.

Since Java ran inside a virtual machine, it was a little more complicated to talk directly to the OS. For example, if you needed to access the computer's file system, then you probably shouldn't hard code something like "c:/ProgramFiles/tmp" since that wouldn't work if your Java code ran on a Mac (c:/ is the path to the main hard drive on a Windows computer, whereas macOS doesn't care about the physical drive but, rather, the medium being accessed, with a path like "/Volumes/Macintosh HD/Users/jmoreno/tmp").

Since the path to a file or folder (directory) is different on each OS, the software developer has to use system properties that the Java virtual machine populates when it starts up (on Windows, the root is "c:/" and on macOS it's "/Volumes/Macintosh HD").
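
As an illustration (a sketch, not code from any of these projects), Java exposes those values through system properties and File.separator, so a path under the user's home directory can be built without hard coding anything OS-specific:

import java.io.File;

public class PortablePaths
{
    public static void main(String[] args)
    {
        // The JVM populates these at startup, so the same code works on
        // Windows, macOS, and Linux without hard-coded paths.
        String home = System.getProperty("user.home");
        String tmpDir = System.getProperty("java.io.tmpdir");
        File scratch = new File(home + File.separator + "tmp");

        System.out.println("Home directory: " + home);
        System.out.println("Temp directory: " + tmpDir);
        System.out.println("Scratch folder: " + scratch.getAbsolutePath());
    }
}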

Not hard coding OS paths requires a bit of discipline, but it keeps the software developer honest and prepared if their code needs to run on a different OS than was originally intended. This type of discipline was key, in 2005, when Apple switched the Macintosh CPU from IBM's PowerPC chip to Intel's CPU. Steve Jobs announced that, without the public realizing it, Apple had been secretly developing Mac OS X for both CPUs, and the time had come for Apple to switch to the same CPU that Windows ran on. This had the side effect of allowing Windows to run natively on a Mac using Apple's Boot Camp utility software.


Code Reviews

I have written a lot of sloppy code in my time, and I discovered that group code reviews, weekly or biweekly, were a great help. This was the place where we could show off our code to the rest of our team, and the rest of the team could question anyone on the code they wrote. Most teams don't go out of their way to review someone else's code if it works as expected. Typically, it's not until a particular software developer has left a team that someone else has to read and review the departed team member's code. This can raise a lot of questions as to what the original purpose of the code was.

Good coding practices and discipline will pay dividends years down the road, so take the time to do it right. If not now, then when?

Monday, August 28, 2017

My Favorite Technical Hacks

Hacks are simple shortcuts that increase productivity. They can be inelegant, but they solve a problem quickly. Hacks may be brittle, solving a problem only under specific conditions, or they may require a few extra steps to realize the effective solution. An ideal hack is an innovation or design pattern that works so well it becomes a feature. My favorite non-technical hack (life hack) is one I use after hunting for a parking spot when they're scarce. In the past, I've parked my car and rushed off to my destination without paying attention to where I parked. This has happened to me a couple of times in La Jolla and Pacific Beach. So, one life hack I use to remember where I've parked is to open the Maps app on my iPhone and take a screenshot as soon as I'm parked.

I've come across a few computer hacks that stick out in my mind. Computer hacks are harder to explain than life hacks because one has to have an interest in software engineering. But, here goes...


1. Preventing JPG "Theft"

Java was the hot new language when I first started working at Apple. In order to get up to speed I coded as much Java as I could, especially applets since that seemed to be the future of the Web. (It turns out that Java became everything Ada wanted to be, and JavaScript became everything that Java applets wanted to be.)

One area that I focused on was coming up with a way to prevent JPG images from being "stolen" from a webpage. Once an image is displayed on a screen (computer, smartphone, etc), there's no simple way to keep someone from taking a screen shot. My Java applet solution got around this by taking the thumbnail of the image and blowing it up to the full size image so that it was pixelated (blurry). Then, as the user moused over the image in the applet, a small portion would come into focus. This allowed the user to see the entire image at full resolution, but only in parts (one-sixteenth, to be exact). While it was possible to take 16 screen shots and piece them together into a single, full resolution image, that was far from practical.

The full size image was encrypted on the web server to keep a user from downloading it directly from the server. The encrypted image was then sent to the user's web browser and decrypted in pieces, while in memory, inside the client's applet as they moused over the image. 


2. Facebook encrypted UDP

A while back, I heard about a brilliantly simple trick that Facebook uses to speed up their site. When a Facebook user logs into their account, their data is fetched from a database. While the data is being fetched, the user has to wait. The wait could be imperceptible, or it could be noticeably long if the website is under a heavy load. "Heavy load" is a relative term, but Facebook serves more than two billion active users per month, so saving any amount of time makes a noticeable difference at that scale.

Wouldn't it be great if Facebook's servers knew what data a user needed before the user formally requested it? Well, that's effectively what Facebook's done with their little trick, which simply involves sending an encrypted UDP datagram ahead of the formal TCP/IP request. UDP requests are fire-and-forget, meaning there's a small chance they might not arrive at their destination, but, if they do arrive (and they usually do), they'll reach Facebook's servers sooner than a TCP/IP request. There's more overhead with TCP/IP since it guarantees delivery (or notice of a failed delivery). TCP/IP is the reason that webpages render perfectly compared to the BBSs of the 1980s, which used unreliable dial-up modems where static and interference would be misinterpreted as data and displayed as garbled text.

So, the UDP datagram arrives ahead of the TCP/IP request which enables Facebook's servers to pre-fetch the data and load it in its cache before the formal TCP/IP request arrives. This hack is a simple, yet elegant, way to optimize a website for speed simply by "priming the pump."
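
I don't know the details of Facebook's implementation, but the general shape of the trick is easy to sketch: fire a small datagram at the server just before making the normal request. Here's a minimal Java illustration; the host name and port are hypothetical:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class PrefetchHint
{
    public static void main(String[] args) throws Exception
    {
        // The payload would be encrypted in practice; plain text here for brevity.
        byte[] hint = "user=12345".getBytes("UTF-8");

        // Fire-and-forget UDP datagram so the server can start warming its cache
        // before the slower, connection-oriented TCP request arrives.
        DatagramSocket socket = new DatagramSocket();
        InetAddress server = InetAddress.getByName("prefetch.example.com"); // hypothetical host
        socket.send(new DatagramPacket(hint, hint.length, server, 9999));   // hypothetical port
        socket.close();

        // ...the normal HTTPS/TCP request for the page would follow here.
    }
}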

3. Safari DNS and Pre-loading

DNS: Safari speeds up webpage load times by looking at all the host names on a webpage, once it's loaded, and then performing a DNS lookup. This saves time, later, when a user clicks on a link since the DNS lookup has already been completed. The time savings might go unnoticed or it can save a few seconds. (There's even an HTML tag to help DNS prefetching.)

Pre-loading: Although I haven't read about this hack, I noticed it when I was monitoring my web server's logs in real time. As I started typing my own domain name in Safari, it came up with an autocomplete suggestion before I finished typing. At the exact moment that the autocomplete suggestion came up in Safari, I noticed an HTTP request for that autocomplete suggestion hitting my web server and showing up in my logs. In other words, Safari was loading a webpage before I hit enter. There's not much harm in doing this even if I never formally request that URL. This is why some webpages load in a flash, especially when I'm on a fast Internet connection and the web server is using a content delivery network (CDN) like Akamai or CloudFront.


4. Keeping iOS Apps Running in the Background

For nearly four years I led the San Diego Kickstarter Meetup, where I mentored entrepreneurs on crowdfunding their products. (At one point, we had six live crowdfunding campaigns.) A couple of the entrepreneurs had iOS apps that accompanied their product and needed to continue running in the background; but iOS doesn't like to keep an app running in the background because it drains the battery. One of the most interesting hacks to keep an app running in the background was to simply play a silent MP3 file, which kept the app "alive" even when it was in the background. The downside was that you couldn't play music from another app, but for some situations that was fine.


5. Timestamping Race Photos

In the late 1990s, I started going to races (5Ks, 10Ks, marathons, etc.), snapping race photos at the finish line, and then selling them either at the race or online. The challenge was finding a runner's photo among thousands – bib numbers had to be entered manually, which would take many hours. I came up with a solution that worked great and which, to my surprise, no other race photographer had implemented. (Nowadays, RFID chips attached to the racers' running shoes solve this problem.)

My solution was to simply synchronize the time on my digital camera to the race clock, where midnight (00:00 on a 24-hour clock) was the start of the race. If a runner finished a race in 23 minutes and 30 seconds, then they could simply start looking for their race photo around 00:23:15 (23 minutes and 15 seconds after midnight) since the photos were taken about fifteen seconds before the runner crossed the finish line.

Not all hacking is bad. 🖥

Tuesday, August 18, 2015

Random Thoughts on Randomness



Here's a random thought on randomness...

In a typical state lottery, like California's Powerball, a player chooses a combination of five or six numbers between 1 and 59.

So, how likely is a lottery's winning set of numbers to be 1, 2, 3, 4, 5 or 1, 2, 3, 4, 5, 6?

Surprisingly, it's no more or less likely than California's most recent Powerball winning numbers: 3, 13, 17, 42, 52, 24. Random numbers are random numbers. While 1, 2, 3, etc. doesn't seem random, that combination is no more or less likely than any combination with no apparent pattern. Don't forget, since we're dealing with pure numbers, there's solid mathematics behind it: every specific combination is equally likely.
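
To put a number on it (assuming, for simplicity, a game where six numbers are drawn from 1 to 59, ignoring the separate Powerball number), every specific combination has exactly the same probability:

P(1, 2, 3, 4, 5, 6) = P(3, 13, 17, 24, 42, 52) = 1 / C(59, 6) = 1 / 45,057,474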

Thursday, September 18, 2014

Pulitzer Prize for Coding and Blogging?

I've debated whether coding is art.

Writing prose has a lot of similarities to writing code. Both activities require a lot of time spent inside a text editor. The key difference is the final product. When writing prose, the audience sees the final written letters. When writing code, the audience sees what the software does, not what it is in its raw form.

Coding seems more like a craft than an art when you consider that it's one key part of software engineering. This difference is even more pronounced when considering the Pulitzer Prizes.

The Pulitzer Prize board usually awards a prize in each category to a single person. Yet there were a lot of people on the team who contributed to the winning book, news report, or editorial cartoon. Compare that to making movies or software, which require large teams. Software released today is not written from scratch, like a book or poem. This is obvious when you consider the OS and code library dependencies.

Blogging, on the other hand...

The Question is Begged

Why is there no Pulitzer Prize category for blogging?

I wholeheartedly believe there should be a Pulitzer Prize category for individual blogging. After all, the Pulitzer board awards its prizes to individuals. Some of the prizes are for journalism and some are for art. Are the Pulitzers about the content or the medium? Meta-blogs, such as the Huffington Post, have won Pulitzer Prizes. But I would no longer consider HuffPost a blog like, say, TechCrunch. Rather, HuffPo is an online journalism news source. There's a distinct difference.

Bloggers are doing important work. The Pulitzer Prizes should formally recognize this with a category of its own. When that happens, I shall nominate the Scripting News blog – not just for being around for 20 years, next month, but for defining the true essence of blogging.

If you agree, then please let the board of the Pulitzer Prizes know:
pulitzer@pulitzer.org

Thursday, August 28, 2014

Why is it Called Coding?

This piece is a response to this morning's Facebook and scripting.com post, What "coder" means and why it's bad by Dave Winer. I originally wrote this as a Facebook comment and then posted it here on Mea Vita: Carpe Diem.


Dave, I read your Facebook post and scripting.com piece, What ‘coder’ means and why it's bad. I think you and I are seeing eye-to-eye.

Some might say we’re splitting hairs, but this distinction is important since good definitions make for clear ideas.

The term coder is too generic to describe what we do. The problem is similar to the word painter. The guy who paints my living room is a painter; so are Frida Kahlo and Pablo Picasso. The former has to paint inside the box, the latter outside. While we also describe the latter as artists, that term has its own problems since it's too broad.

What's the Difference?

Coding is what we do when we write code, just as writing is what a writer does when they write prose. But we are not coding when we design a database, develop an API, or architect a server farm. These tasks are more than coding; they're engineering, since we're synergistically engineering software. The sum is greater than its parts.

I think all software engineers are coders, but not all coders are software engineers. Perhaps programmer and coder are closer synonyms. The problem with the term programmer is that it's not specific enough. Programmers can also be DJs who strategize radio station formats. In an archaic sense, programmers were also people who set up manual programs to sell large amounts of stock (which is different from computer-guided program trading).

Some companies put programmers in restrictive boxes by only allowing them to code to spec. It’s an assembly line that lacks innovation and squelches initiative. It's much like the painter I hire to paint my house who has no say in the color or pattern.

Coding just happens to be the most visible part of software engineering. It’s dynamic, mostly linear, and more tangible than an API or database schema, so that’s how the layperson describes it. Plus, it's a single word. It's much catchier to say, "I'm coding," vice "I'm software engineering."

PS – I noticed that I have a coding category on my blog, but I don't have a software engineering category. That's changing with this piece.

Thursday, August 14, 2014

Monitoring How Things Fail

As our world becomes more complex, things fail in ways we never expected. In the military, we trained for different scenarios so we had documented responses. I see much less of this scenario based training in high tech simply because it's too complex to cover critical, unimagined, failures. Grace Hopper captured this sentiment best when she said, "Life was simple before World War II. After that, we had systems."

This week I started reading a book I heard about on NPR, The Checklist Manifesto. The author, Atul Gawande, is an accomplished surgeon. He noticed that seemingly small, yet critical, bits of information were overlooked in the operating room. Borrowing from the experiences of airplane pilots, Dr. Gawande began using checklists before operating on patients resulting in fewer mistakes in the OR.

When I originally launched Adjix I encountered different ways that servers could fail. A few incidents stick out in my mind.

Don't Backup Onto Your Only Backup

Always have a working backup. This seems obvious. I noticed one of my Adjix servers had slow disk I/O – in other words it seemed that the hard drive was failing so I backed it up onto a backup drive. Unfortunately, the backup never completed. I was left with a failed server and an unstable, corrupted, backup copy. The important lesson I learned here was to rotate backups. Nowadays, this should be less of a problem with services like AWS.

A couple of years later I saw a similar issue when I was consulting for a startup that maintained two database servers in a master/slave cluster. The hard drive on the master server was full, causing their website to go down. Their lead developer logged into the master server and started freeing up space by deleting files and folders. In his haste, he deleted the live database. When he logged into the slave database he discovered that his delete command had replicated, which deleted the slave database, too. Their last offline backup was a week old. He was fired as the rest of the team took spreadsheets from the operations and sales departments and did their best to rebuild the live database.

How Do You Define Failure?

Apple uses WebObjects, which is the web app server that has powered the iTunes Store and the online store since they were created. WebObjects included one of my favorite RDBMSs, OpenBase. The beauty of OpenBase was that it could handle up to 100 servers in a cluster and there was no concept of a master/slave. Any SQL written to one server would be replicated to the others within five seconds. This is very handy for load balancing.

OpenBase's process for clustering was elegantly simple. Each database server in the cluster was numbered, 1, 2, 3, etc. Each database would generate its own primary key, 1, 2, 3, etc. These two numbers were combined so that the number of the server was the least significant digit. For example, database server #8 would generate primary keys like 18, 28, 38, 48, etc. This ensured that each database server's primary keys were unique. The SQL was then shared with all the other databases in the cluster.
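
A sketch of that key scheme (my own illustration, not OpenBase code): multiply the locally generated sequence number by ten and add the server number, so server #8 produces 18, 28, 38, and so on.

public class ClusterKeyGenerator
{
    private final int _serverNumber; // 1 through 9 in this simple sketch
    private long _sequence = 0;

    public ClusterKeyGenerator(int serverNumber)
    {
        _serverNumber = serverNumber;
    }

    // Each server counts 1, 2, 3... locally, then appends its own number as the
    // least significant digit so keys never collide across the cluster.
    public synchronized long nextPrimaryKey()
    {
        _sequence++;
        return (_sequence * 10) + _serverNumber;
    }

    public static void main(String[] args)
    {
        ClusterKeyGenerator server8 = new ClusterKeyGenerator(8);
        for (int i = 0; i < 4; i++)
        {
            System.out.println(server8.nextPrimaryKey()); // prints 18, 28, 38, 48
        }
    }
}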

Here's where something looked better on paper than in the real world. If one of the servers failed, it would be removed from the cluster. The problem was, how do you define failure?

If one of the database servers was completely offline then that was clearly a failure. But, what if the hard drive was beginning to fail – to the point that a read or write operation might take 20 or 30 seconds to successfully complete? Technically, it hasn't failed, but the user experience on the web site would be horrible. One solution would be to set a timeout for the longest you'd expect an operation to take, say five seconds, and then alert a system admin when your timeout is exceeded.

Who Watches Who?

When I launched Epics3, I had to monitor an e-mail account for photo attachments. I used a Java library that implemented IMAP IDLE which is basically an e-mail push notification standard. Perhaps there was a limitation in the Java library I was using, but IDLE simply wasn't reliable in production. It would hang and my code had no way to detect the problem. My solution was to simply check the mail server for new e-mail every ten seconds. This was a luxury I had since my bandwidth wasn't metered and Gmail didn't mind my code frequently checking for new e-mail.

Like Adjix, Epics3 was a WebObjects Java app. WebObjects uses a daemon, wotaskd, that checks for lifebeats from my app. If the app stops responding, wotaskd kills it and restarts it. The problem I had was that my Java thread would sometimes hang when checking for new e-mail. The app was alive and well, but the e-mail check thread was hanging. The solution was to have the e-mail check thread update a timestamp in the application each time it checked for new e-mail. A separate thread would then check that timestamp every few minutes. If it found that the timestamp was more than a few minutes old, the app would simply kill itself and wotaskd would automatically restart it. This process worked perfectly, which was a relief.
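
Here's the general shape of that watchdog pattern as a sketch (not the original Epics3 code): the mail-checking thread refreshes a timestamp, and a separate thread exits the JVM if the timestamp goes stale so wotaskd can restart the app.

public class MailCheckWatchdog
{
    private static final long STALE_MILLIS = 5 * 60 * 1000; // "a few minutes"
    private volatile long _lastCheck = System.currentTimeMillis();

    // Called by the e-mail checking thread every time it polls for new mail.
    public void recordMailCheck()
    {
        _lastCheck = System.currentTimeMillis();
    }

    // The watchdog thread: if the mail-check thread stops updating the
    // timestamp, the app kills itself and wotaskd restarts it.
    public void startWatchdog()
    {
        Thread watchdog = new Thread(new Runnable()
        {
            public void run()
            {
                while (true)
                {
                    try
                    {
                        Thread.sleep(60 * 1000); // check once a minute
                    }
                    catch (InterruptedException e)
                    {
                        return;
                    }
                    if (System.currentTimeMillis() - _lastCheck > STALE_MILLIS)
                    {
                        System.exit(1); // wotaskd notices the app died and restarts it
                    }
                }
            }
        });
        watchdog.setDaemon(true);
        watchdog.start();
    }
}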

Things don't always fail as we imagined so it's important to avoid a failure of imagination.

Monday, August 4, 2014

Unable to Decode Playground Data

I've been coding in Xcode's Playground using Swift for the past two months. It seemed that my code was touching on an Xcode edge case causing it to stop evaluating with an error message: "Error running playground. Unable to decode playground data." I can tell where the problem is since the playground sidebar stops displaying output at the line of code that's choking. But I can't tell what the problem is.

The line of code having the problem is a function call at the end of a do..while loop. I initially thought my string manipulation was causing the issue since Swift strings are a little different than the Java NSString that I'm used to in WebObjects.

Narrating One's Work

I figured it might help if I wrote about my issue. Perhaps someone else is having the same problem. A quick Google search shows that a few people are encountering the same issue. But too few are having this problem to find a definitive solution other than chalking it up to an ongoing Xcode bug.

Almost There

I initially thought I had discovered the cause, earlier today, when I changed the closed range operator to the half-open range operator (i.e. I changed ... to ..<). Once I made that change my playground compiled all the way to the end. But it was a short-lived victory: when I restarted Xcode, the playground error returned. Toggling between the closed and half-open range operators at least gets my code to compile and run in the playground. So, perhaps I'm getting closer.


Monday, June 30, 2014

Swift First Impressions

Hiking the Pacific.
Earlier this month, Apple announced a new programming language, called Swift. It's designed to be fast, safe, modern, and interactive.

Swift is big news since it was unexpected. I haven't been this excited to learn a new programming language since Sun released Java in the mid-1990s.

Java and Swift both took about four years to develop. But Swift is already more mature than Java 1.2 was two years after its initial release. Sun used to have an office down the block from the Apple Campus at Mariani One. When I started working at Apple, in 1998, we joked that we could hear the Java APIs deprecating across the street in the Sun building. I don't imagine many Swift APIs deprecating anytime soon since they're based on, and bridged to, Cocoa's Objective-C APIs. (Coincidentally, Java 1.2's codename was Playground, which is also the name of Swift's interactive coding environment.)

Getting My Feet Wet

Last week I created a couple simple Cocoa Swift apps for both OS X and iPhone. It was a piece of cake. To this day, I still love that I can drag between my code and my UI in Interface Builder to link up ivars (outlets) and functions (actions).

I spent most of this past Saturday getting up to speed on Swift. I read the docs and watched a few tutorials including the Introduction to Swift, Intermediate Swift, and Swift Playgrounds videos from WWDC 2014. So, after hiking half a dozen miles along the Pacific, yesterday morning, I decided it was time to dig deep into Swift.

What to Code?

I wanted to write an algorithm requiring a fair amount of trial and error without much mental heavy lifting. For me, the answer was string parsing. I once spent a long time coding, testing, and debugging Java (Eclipse with the WOLips plugin for WebObjects) to parse SMS text messages for newspaper classified ads:
Sell  (Item name)  ($Price)  (ZIP code)  (Item details)

My Tweet Storm Swift Playground

For my first real Swift task I reverse engineered Dave Winer's Little Pork Chop algorithm which lets you send out a tweet storm. Basically, Little Pork Chop breaks up blocks of text longer than 140 characters into tweet size chunks.
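
The splitting itself is straightforward. Here's a rough sketch of the idea in Java, for illustration only – it's neither Dave's algorithm nor my Swift playground:

import java.util.ArrayList;
import java.util.List;

public class TweetStorm
{
    // Splits text into chunks on word boundaries, then numbers them "1/3", "2/3"...
    // Pass a limit a few characters under 140 so the numbering suffix still fits.
    public static List<String> split(String text, int limit)
    {
        List<String> chunks = new ArrayList<String>();
        StringBuilder current = new StringBuilder();
        for (String word : text.split("\\s+"))
        {
            if (current.length() > 0 && current.length() + 1 + word.length() > limit)
            {
                chunks.add(current.toString());
                current = new StringBuilder();
            }
            if (current.length() > 0)
            {
                current.append(' ');
            }
            current.append(word);
        }
        if (current.length() > 0)
        {
            chunks.add(current.toString());
        }
        List<String> numbered = new ArrayList<String>();
        for (int i = 0; i < chunks.size(); i++)
        {
            numbered.add(chunks.get(i) + " " + (i + 1) + "/" + chunks.size());
        }
        return numbered;
    }
}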

Swift and Powerful

In a nutshell, Swift's Playground is worth its weight in gold. The Playground's interactivity is powerful. Typically, I write a few lines of code, compile it, and then run it. I would never recompile an app after writing every single new line of code. But that, effectively, is what Playground does for me. It kept me focused and my code clean. I caught my syntax and logic bugs in real time as it displayed my variables and loop counts. Even better was that my infinite loops were immediately visible in the sidebar. I can see myself writing all my algorithms in Swift's Playground before copying them to my projects.

Swift Playground Gotchas

On the flip side, it doesn't seem possible to step through code, so loops execute until they terminate. Once a loop completed I could see what happened by examining the value history or using Quick Look.
Quick Look into an array and then into each element.

Another wrinkle I encountered was trying to get the character at a string index. The best solution I came up with was fairly ugly:
var currentChar = String(Array(subTweet)[indexOfCurrentChar])
This might be because an index into a string is usually assumed to equal a byte offset, which doesn't hold true with UTF encodings where each character may require multiple bytes. If that's the case, then I need to write my own String indexer. Please let me know if I'm off base.

Free Code

It took me about two and a half hours to code and debug my tweetstorm algorithm in the Swift Playground. Am I getting Swift? Yes, I most definitely am, but I'm probably still using old coding patterns while I learn the slick new modern Swift syntax.

One final piece of Swift beauty is that you can download my Tweetstorm Swift Playground and run it for yourself in Xcode6-Beta.


Tuesday, April 29, 2014

To Offshore or Not to Offshore at Apple?

When I worked at the Apple Online Store we were organized in teams of six.

My team consisted of four software engineers, one project manager (Scrum Master), and one QA engineer. The QA engineer was a permanent part of our team. He was a white-box tester. We wrote our own unit tests and demonstrated scalability with our component tests while our QA engineer reviewed our logic. He'd look for obvious issues like uncaught null pointer exceptions while digging deeper in search of ambiguous cases like poor security implementations.

New Code Here

Any issues our QA engineer found in our code, during a sprint, were fixed by the software engineers on our team. The rest of the bugs (outside of our recent sprints) were entered into Apple's bug tracking system, Radar. Then, once a week, we'd meet to prioritize bug fixes in Radar. We off-shored the bug fixes to India since it wasn't sexy work. Once it was fixed, we reviewed the code and verified it before integrating it into the main branch.

Bug Fix There

Offshoring bug fixes worked beautifully. Each bug was clearly documented: what really happened vs. what was supposed to happen. I had no idea who was on the other end fixing our bugs, but I realized they were intelligent and hardworking. However, I could tell they weren't experienced with our technology (WebObjects) or conventions. As one example, I reviewed code where the offshore team had hard coded SQL queries directly into Java. Other times, I saw Java objects instantiated simply to access static Java methods.

The beauty of offshoring a bug fix is we could focus on new feature development.

New Code There

Since offshoring bug fixes worked so well we decided to give them a shot at new development. We quickly discovered that was a mistake. The offshore team didn't have enough context to write good code. Their implementations were too brittle.

This problem frequently happens in any coding organization that's offshoring new development. Without a product roadmap, the offshore team simply writes code to do exactly what you asked for; and no more.

I never saw requirements sent to an offshore team to refactor code. That would be too nebulous of a task. By the time I would have documented all the ins and outs of a refactoring requirement,  I could have written the code myself. The real problem is that we don't know how our code will behave until we run it.

And that was the crux of the problem with the code written by the offshore teams I've dealt with. They could only do exactly what you asked for, now, without knowing what was coming. Plus, the requirements had to be very explicit. 

Software engineering isn't an event – it's a process. It's a process of continual improvement and refinement. It's iterative.

Tuesday, April 15, 2014

$5,000 Security Breach, Part 2


Every so often I write a blog post that immediately receives many thousands of views. Part 1 of this story fell into that category.

Where I last left off, on Thursday, I was in the shower when I had an epiphany: I had figured out how my Amazon Web Services credentials were compromised. At least I suspected how, but I was running late after my call with Amazon as I got ready for the Spring Fling tech event. I didn't have time to comb through my public repository account, so I deleted my entire GitHub account. I had only used it once, years ago, when I checked in an open source WebObjects project I had developed.

Jodi Mardesich interviewed me for the details and gave my story a great write up at ReadWrite.

Coda update: Amazon has confirmed that they'll grant me a one time exception for my faux pas.



Thursday, April 3, 2014

Lazy Programming

There are two types of lazy programming, good and bad.

Good Lazy

Lazily instantiating and populating data structures is a perfect example of a good design pattern. This technique is also how CDNs populate their edge servers. Don't create or store anything until the last possible moment.

When implementing this technique, I use accessors that have the same name as my instance variables (ivars). Below, my _employees ivar is set to null when a class is instantiated and it's not populated until the first time it's touched (accessed). This is the beauty of key-value coding accessor methods.

private NSMutableArray _employees = null;

public NSMutableArray employees()
{
    if (_employees == null)
    {
        this.setEmployees(new NSMutableArray());
    }
    return _employees;
}

public void setEmployees(NSMutableArray newEmployees)
{
    _employees = newEmployees;
}

Depending on my performance requirements, this design pattern works well when I need to save memory. However, if memory isn't an issue, but speed is, this might not be ideal since each time the employees() method is called there's an O(1) test to see if the private ivar is null. In cases where speed needs to be optimized, it's best to pre-populate the data structures (caches) before the web app begins accepting requests. At the Apple Online Store, we pre-populated only when necessary. In every case, though, the key is to avoid premature optimization.

Bad Lazy

The goal of a software engineer is to provide the best possible user experience (BPUX).

As a programmer, I'm not shooting for perfection but I know when something can be done better. (If I went to sleep last night then I had time.)

If I have to code something that's singular or plural I'll go out of my way so it doesn't read:
You have 1 item(s) in your cart.

It's not very hard to code:
You have 0 items in your cart.
You have 1 item in your cart.
You have 2 items in your cart.
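
A tiny sketch of what that takes (illustrative only, not code from any store I've worked on):

public class CartMessages
{
    // Returns "You have 0 items...", "You have 1 item...", "You have 2 items..."
    public static String cartMessage(int count)
    {
        String noun = (count == 1) ? "item" : "items";
        return "You have " + count + " " + noun + " in your cart.";
    }
}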

There is no shortage of websites where I've entered my phone number (760.444.4721) or credit card number (4111-1111-1111-1111), only to hit enter and be told I made a mistake and my digits need to be reentered with only numeric characters.

Some programmer had to go out of their way to search the string I entered, confirm there was a non-numeric character, and then return an error message to me. This is my big pet peeve – it's too in-your-face. I entered all the information the programmer needed and they could have parsed out the digits. When I'm coding, I simply write a cover method to return only the numeric digits.
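
That cover method is nearly a one-liner. Here's a sketch of the idea:

public class InputCleanup
{
    // Strips everything except digits, so "760.444.4721" and
    // "4111-1111-1111-1111" both pass a numeric-only check.
    public static String digitsOnly(String input)
    {
        return (input == null) ? "" : input.replaceAll("\\D", "");
    }
}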

Software engineers aren't sales people, so they don't live the ABCs.

Friday, March 14, 2014

Writing Words, Writing Code, Hemingway Style

One thing that motivates me to write is reading great writing. Whether I'm writing words or writing code, the ability to capture an idea and write it in an impactful way is powerful.

Hemingway – much like Apple – knew how to pare away the cruft to get to the heart of the experience.

When I first began writing fiction I read Hemingway's short stories for inspiration. The first one I read was The Snows of Kilimanjaro where he vividly described a scene without explaining the details.

...he and the British observer had run too until his lungs ached and his mouth was full of the taste of pennies...

This description hit me like a ton of bricks. In this single sentence I understood Hemingway's writing style. When someone's shooting at you the adrenaline deep in your throat tastes exactly like copper pennies. Hemingway had seen combat – he knew what adrenaline tasted like – so there's no need for him to explain it.

The West Wing: "You tasted something bitter in your mouth.
It was the adrenaline. The bitter taste was the adrenaline."
A reference to the bitter taste of adrenaline also shows up in an episode of The West Wing. Josh Lyman is in denial about his PTSD after being shot. A Yo-Yo Ma performance triggers a PTSD episode, and a psychologist jump-starts Josh's counseling session by telling him about the bitter taste.

Hemingway left out details in a way that pulls the reader in rather than shutting them out. That's hard to do. And Hemingway knew exactly what he was doing, which he described in his essay, The Art of the Short Story:

A few things I have found to be true. If you leave out important things or events that you know about, the story is strengthened. If you leave or skip something because you do not know it, the story will be worthless. The test of any story is how very good the stuff that you, not your editors, omit.

Writing workshop

It's pure chance that I came across Joyce Maynard's writing workshop last spring, which led me to her home in Mill Valley to work on my writing. There's nothing better than being taught by a woman who's earned her living as a writer her entire adult life. I'm writing this piece today because, yesterday, she pointed out that even the best writers have to handle rejection. And it's through Joyce that I feel a connection to Hemingway, since she lived with J.D. Salinger and Salinger met with Hemingway during WWII.







Saturday, July 27, 2013

Oversimplifying Simplicity

The Way to Eden.
I'm reading Ken Segall's thoughts and experiences while working with Steve Jobs. He's had so much interaction with Steve while at Apple and NeXT that he's a cornucopia of best design and marketing practices.

Segall talks about how "one" is the simplest of concepts. It's an intriguing philosophy – there was even an entire episode of Star Trek dedicated to this concept and its followers.

This belief in "one" is why Apple's mice, track pads, iPhones, etc., from the beginning, have only one button. One is where it all begins.

What's the simplest numeral system? It's not base 10 (decimal) since you have to memorize 10 different digits. Is it base 2 (binary)? After all, computers and human DNA use binary to store information (ones and zeros, or A-T and C-G combinations). Certainly binary is the simplest? Au contraire; how many people can convert 1010 from binary to base 10? Not simple... not simple at all.

The simplest numeral system.
It turns out that unary is the simplest numeral system for representing natural numbers – in other words, unary uses just ones. This is how a caveman would keep track of how many of something he owed, even before there was written or spoken language: take a pile of rocks and, for each one of something, move a rock to another pile. Do the reverse when taking inventory.

This is how a bouncer counts people at the door or how the simplest of card counters tries to beat the house at blackjack. We've all used unary to keep track of things when we tally items with four slashes and then a diagonal.

Something Simpler?
Where I disagree with Segall's thinking is when he points out that "zero is the only number that's simpler than one." Ironically, this is not the case, as I learned from my assembly language professor, Mr. Lee. If you think back to when we learned Roman numerals in grade school (I, II, III, IV...), you'll quickly realize that there was no numeral for zero. This was also true in other ancient civilizations' numeral systems, such as the Chinese and early Arabic systems. As simple as zero seems, it's a fairly complex concept to have nothing of something – just try asking any handheld calculator to divide by zero and you'll see that it does not compute.

Trying to be simpler than the simplest makes things more complex.

Sunday, June 30, 2013

Technical Interviews: The Missing Piece

Typical Interview Question: Write a Java method to reverse a string.
TechCrunch had a recent piece about the demise of the technical interview. In software engineering, the technical interview involves writing code. Companies like Amazon and Google have a reputation for asking brain-teasers such as, "How many gas stations are in the US?" or "How many Ping-Pong balls can you stuff into a Boeing 747?" The idea is to see the job candidate's thought process. While these questions are mentally challenging, they're probably not the best indication of how good the candidate is at programming.

I've been through a number of software engineering job interviews where I've been asked to write code and discuss fundamental computer science questions. Writing code is an important part of hiring software engineers and it definitely has its place in the job interview process. And, it's perfectly okay for the candidate to make typos or have syntax errors when writing computer code on a white board. The idea is to see if the candidate understands the fundamentals of computer science such as Big O notation when it comes to a binary tree [O(log n)] or hash table [O(1)] or the basics of recursion and language syntax.
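
As an aside, the whiteboard question in the caption above has several reasonable answers; here's one minimal sketch:

public class ReverseString
{
    // Reverses a string by swapping characters from both ends toward the middle.
    public static String reverse(String s)
    {
        char[] chars = s.toCharArray();
        for (int i = 0, j = chars.length - 1; i < j; i++, j--)
        {
            char tmp = chars[i];
            chars[i] = chars[j];
            chars[j] = tmp;
        }
        return new String(chars);
    }
}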

The Missing Piece
One thing I've noticed missing from all my job interviews over the past 15 years is that no one has ever asked me to show code that I've already written, refactored, and trusted for many years. The beauty about reusing code that either I or someone else has previously written is that code you don't have to write is code that you don't have to debug.

Software engineers who live for and love writing computer code have many side projects. You'd be hard pressed to find a good software engineer who doesn't have something currently deployed whether it's a web application or smart phone app. Just like an artist has to paint, or a poet has to write – regardless if they're paid or not – a coder has to code.

The Alternative
The current software engineering interview at a decent tech company involves a series of 45 – 50 minute long interviews where a pair of employees ask the job candidate questions. This process can last four to six hours and the key part that's missing, today, is where the job candidate gets to show off what they've previously written and released. This is especially important for a 40+ year old job candidate who should have a massive bag of tricks since they've probably been coding, on a daily basis, for more than a quarter of a century.

Instead of multiple 45 minute interviews with two employees and a job candidate, it would be much more effective to have a couple 90 minute interviews with four employees where the candidate can show how they architected, coded, and deployed a website or smart phone app. Ideally, the candidate could ssh into their live servers to show the details, challenges, and architecture of how a web app works while showing off the code that he/she has written to accomplish it. Writing code on a white board is very academic; seeing code that a candidate has deployed and maintained over several years is about as real as it gets.

No company would hire a graphic designer without seeing the job candidate's portfolio so why don't tech companies demand the same thing from software engineers?

Saturday, May 25, 2013

The Art of Coding at Any Age

May 26, 2013 update: Dave responded to this post with a podcast.

I've written many lines of code sitting here, where I wrote this blog post.
Yesterday, Dave Winer shared a New York Times interview with Billy Joel where the Piano Man said, "I thought there was a mandatory retirement age at 40, but then the Stones broke that barrier."

Dave was born the same year as Steve Jobs, Bill Gates, and Yo-Yo Ma. That means Dave is well past 40, which was thought to be the age when computer programmers (coders) were brought out back and shot. For most programmers, a successful career in Corporate America means up or out: you start off as a software engineer, then you become a tech lead, and then you forgo coding to manage direct reports.

People like Dave and me, who enjoy the trenches of coding past 40, are not the norm. A 30-year-old software engineer looks upon a fellow programmer, 20 years his senior, as out of touch. (Except, in rare circumstances, when they're regarded as one of the greats who works close to the kernel, meaning they really know what they're doing.)

While Dave has successfully created and sold businesses, his first love is programming. A couple of days ago Dave made a point that if he were a visual artist or musician then no one would bat an eye at the fact that he's closing in on 60 while still coding every day. No one asks, "Why is Yo-Yo Ma still playing the cello? Why hasn't he moved on to conducting?"

Elegant Code
Dave also asks, "Why can't people see that this [coding] is an art?" That's a very good question and I love Hugh MacLeod's comment, "Art’s purpose is to express consciousness." While software usually serves a process purpose, we still write it much like we'd write most anything else in a text editor such as a novel. But is it fair to consider it a form of art?

What is the purpose of art? To create. To inspire. To express one’s self. To make one aware of one's surroundings. To make life better, etc. There's no simple definition. To me, holding most any modern Apple product feels like holding art, albeit a highly functional piece of commercial art.

The problem with code, as a form of art, is that people don't see what was created; they only see what it does – its function. Code has function. Art has design. Code is a means to an end: the application. Fine art focuses more on aesthetics than utility. I cannot think of a form of art where the work created is not what's seen, which is different from a computer program that gets translated into machine code. Consumers of code only see the interface, not the implementation. Again, this seems very different than, say, a movie, since the general public critiques and studies the film, not how it was made.

Art is displayed or performed. Where would people observe code? In its raw form or in its final application form? One could argue that an Apple Store is a museum for displaying products of art, like a gallery, but the challenge with code as an art is that it doesn't exist without a medium. While that's technically true of, say, a poem, I can still hold O Captain! My Captain! in my hands in its final form.

The creative process of programming is definitely more art than skill, much like writing a story, suitable for someone to do at any age. Perhaps the ageism in high tech is due to the ever changing technology as more senior programmers stick with older, more comfortable, systems? But, in software engineering, like rock and roll, perceptions will change regarding age.

Chances are, though, if you're still coding on a daily basis, into your 50s, using current technologies, then you're undoubtedly very, very good. To Dave, I ask, "Who else, past 50, codes like you?" Maybe it's time for a Museum of Computer Programming with Ada Lovelace and Grace Hopper inducted into the hall of fame.