Wednesday, December 17, 2014

Treadmill desk as a software developer - 3 months

So we've been going on three months now. I'm able to walk about 30K steps (~14 miles) at least twice a week without feeling too bad. I've killed the tennis shoes that I bought during the summer, so I guess one side effect is that I'll be going through shoes much faster now.

Things I've noticed:

  • There was an online conference that I was watching while I was working. This was great, but given that I was already switching between 2 engaging contexts, I didn't notice that I was walking, and I overdid it again (37K steps, 2 days in a row before I noticed what was happening - I guess that means I'm getting better endurance).
  • On the day that my Fitbit battery died, I wasn't motivated to walk until it was charged. Apparently I definitely want my steps to "count".
  • Grading my class assignments is MUCH more enjoyable while walking. Previously I could barely get through 1 without needing a break; now I can do them all in one "sitting".
  • There is a nice pile of dust, crumbs, etc. that ends up on the mat at the back of the treadmill.
  • I hardly ever sit down anymore at work, although I do still stop and stretch occasionally as I try to hit 30+K once or twice a week.
  • I keep a set of hand weights on my desk for meetings where I don't have to participate or take notes.
Overall I've lost 12 lbs (fluctuating wildly as we go through Thanksgiving and the holidays :)).

Still nothing but good things to say about the treadmill desk.

Monday, December 8, 2014

What I learned about coding from 5th graders

So today I was mentoring / volunteering for Kolter's Hour of Code (www.code.org) and I got to observe 3 different 5th grade classes come through to try to program a visual game like Angry Birds or Lightbot.

Just for context, both of these games involved sequencing directional puzzle pieces to perform a task. For Angry birds, you would move the bird towards the pig, and for Lightbot, you'd move the robot along a grid lighting blue squares.

Not surprisingly, nobody read the directions or prompts before diving right in. The part that I found interesting was that very few would try to solve the problem in small steps.

Nearly every pair (they were pair programming.... woot!), whether they read the prompt or not, tried to solve the entire puzzle before hitting play. Of course it wouldn't work. Then they'd throw nearly the whole thing away and create another complete solution, which also wouldn't work.

One of the things that I coached them on was to test often.... put down one or two commands and see where you are in the sequence. Basically, to get faster feedback.

So I'm curious as to where in life do we learn that we have to solve the whole puzzle at once?


Tuesday, October 7, 2014

Treadmill desk as a software developer - 1 month

Last month I posted my first week experiences on my treadmill desk: http://squaredi.blogspot.com/2014/09/treadmill-desk-as-software-developer.html

Now I'll be moving into my one month review.

My metrics are:
Steps: 19.3K / day avg (darn weekends killing my average :) )
Distance: 11 miles /day avg
Max steps / distance: 31K / 14 miles

So far I've lost 5 lbs. I'm stretching every night. I'm still taking breaks as needed, but overall I'm taking standing breaks instead of sitting breaks. I do have to have some foresight into the evening plans, though. If there is a kids' soccer game where I'll be helping run around, or a dance lesson that I'm teaching, I need to stop walking by early afternoon and sit so that my legs are capable of the evening activities.

I've noticed an increase in my endurance; I've gotten 30K steps in a day a couple of times now and not been sore. I was also able to run the length of our street. (Note: I'm not particularly fond of running, but I was late for pickup from school. I was happy that I made it the whole street length, which is longer than I've run in the recent past.)

Being farther from the ground in the office, both from standing and from the small step up onto the treadmill, means that fans alone don't cut it. I've had to lower the A/C to help keep the room comfortable.

But the bottom line is that I'm loving it. I love being able to walk this much every day and that it is not something "extra". I love that I can walk and work at the same time. I love that, for having a desk job and working from home, I'm lightly to fairly active 6-8 hrs a day.


Tuesday, September 9, 2014

Treadmill desk as a software developer

Background:
I got a treadmill desk and will be writing my review from the perspective of a software developer who works from home.

All metrics are tracked from Fitbit. Just as a baseline, I average about 10K steps a day without the treadmill, but it has taken some dedicated effort.

Day 1 (Thurs)
Based on the corner of the office that I put this in, I noticed that my overhead ceiling fan was less effective. I didn't break a sweat on the treadmill; I just needed to move some floor-based fans toward me. (FYI: all of the computer equipment in my office gets warm, and instead of air conditioning the whole house, I use several fans in my office.)
I found myself looking forward to meetings, especially those where I wasn't an active participant, so I could up my speed to around 3 mph. If I'm an active participant, I found that 2 mph was about right for me.
The treadmill is super quiet, and my colleagues couldn't hear any background noise coming from my end of the call. Personally though, due to my own difficulty separating overlapping sounds, I found that I needed to move to loud speakers / headphones so I could hear better. Even though my hearing is fine, I know that I'm not able to separate sounds well, so the treadmill, fans, and meeting blended together for me.
I didn't get bored as much, so I found myself taking fewer breaks. I also noticed that when I did feel like taking a break, there was a "stop" action. Literally pressing pause on the treadmill was more of a commitment to the break than my usual getting up to go to the bathroom or grab a snack from the kitchen. 23K steps - a little more than double my average up to that point.

Day 2 (Fri)
Okay, I was a little stiff today. Clearly I pushed myself too hard. Today my pace was more around 1.5 mph for the whole day. I found myself inverting my day. Previously my breaks were to get up and walk around: I'd go to the farthest bathroom in the house, or pace while eating my lunch to get more steps in. Today I found myself not doing those things. I even took a couple of breaks just to go sit on the couch for a couple of minutes. Then around 2pm, I couldn't walk anymore. I moved the laptop back to my desk and sat for about an hour or so. That night I had to stretch.

Stretching isn't a bad thing; it means that I'm actually doing something with my muscles. 22K steps, which is still more than I was planning. I was hoping to only get to around 18K today.

Thankfully a weekend is coming up, so I can start fresh again on Monday.

Day 3 (Mon)
Spent all day at 1.5 mph. Still needed a significant sitting break around 2 pm. I've been drinking a lot more water, because I realized that I wasn't drinking enough the other days. It is really easy to overdo it, especially if you hit flow for a couple of hours. One tricky bug and 4 hours later, I've walked nearly 6 miles. Unfortunately I hit 30K steps today. That is my all-time high. I stretched a lot, but still woke up sore.

Day 4
Today I'm going to sit through the morning, and then start walking for my noon meeting. I only walked on the treadmill for 2 hours today, but since it was mostly a non-participatory meeting, I still managed to eke out 20K. In addition to stretching, I had to use my wife's foot massager.

Day 5
So here is my new plan: I'll start the day sitting, then probably just after lunch I'll start on the treadmill. I'll put in an hour or two, then move the laptop to the desk. I'll leave the laptop unplugged and when I run low on battery, finish out the rest of the day on the treadmill. That should get me around 20K steps.

Day 6:
Throwing in some tech geekery: given that I'm walking for a couple of hours and sitting for a couple of hours, I've got two separate external monitors that I'm hooking up to. The one on the treadmill is 25" and the one at the desk is a measly 17". It was very annoying that whenever I switched, I lost the locations of my windows. Be on the lookout for another blog post regarding Slate for managing my Mac window state across multiple monitor configurations. I just started with it today, and it is awesome, but I will need to dive in more to be able to write up the post.

Week 1: Metrics
19.5K step average
62 Miles



Day etc.
I'll do a followup post in a couple weeks, after a month or so of use. But as of today, I'm still following the sit-in-the-morning, walk-in-the-afternoon approach. This works well for me; that way I'm walking when I'm most awake / alert. I do need to stretch every night, which is not a bad thing. I'm feeling really good and haven't noticed a decline in productivity - actually I feel more focused when I'm walking.

I have discovered that if I'm in the middle of something and don't want to shift spaces from the treadmill to the desk, I'll sometimes just stand at the treadmill for a break. That works better than I expected.

It is quiet enough to use at night with the kids asleep in the next room, but the kids informed me as they were going to sleep that if I had my speed too high, my steps were too loud (me walking, not the machine). I had one co-worker who could hear the treadmill (my steps) when I wasn't on the headset, but just on the computer; when pushed, she said she really had to strain to hear it and it was very faint.



Friday, July 25, 2014

No Fluff Just Stuff - Lone Star Symposium Review

The NFJS conference (July 2014 in Austin) was a perfect fit for me. It markets to a Java audience, but there were plenty of other topics covered. For me there was the technical Java track (immediately relevant to my current work), a functional programming track (future focused), and a soft skills track, which I found to be a pleasant and perfect balance. I'm definitely going to go back!


I started out with Javascript Design Patterns by Pratik Patel. He was particularly fond of underscore.js, not only for the feature set that it offered, but for its use in the mixin pattern, which he asserts is easier to maintain than the decorator pattern. I personally liked his example of the command pattern.
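Pratik's actual snippet doesn't survive in this version of the post, so here's a minimal stand-in sketch of the command pattern in plain JavaScript (my reconstruction, not his exact code; the command names are invented):

```javascript
// Command pattern: wrap each action behind a common entry point, so callers
// can queue, log, or dispatch actions without knowing their internals.
var commands = {
  greet: function (name) { return "Hello, " + name; },
  shout: function (name) { return name.toUpperCase() + "!"; }
};

// A single execute() dispatches any named command with its arguments.
function execute(commandName) {
  var args = Array.prototype.slice.call(arguments, 1);
  if (!commands[commandName]) {
    throw new Error("Unknown command: " + commandName);
  }
  return commands[commandName].apply(null, args);
}

console.log(execute("greet", "NFJS")); // "Hello, NFJS"
console.log(execute("shout", "woot")); // "WOOT!"
```

The nice property is that new commands are just new entries in the map; nothing about the call sites changes.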




Java memory by Ken Sipe was really deep, but it explained a whole lot of stuff that I had never understood. Essentially the JVM segments the memory into 2 spaces, new and old. The idea being that, for the most part, there are two types of objects occupying memory: the super short-lived, like a message or request; and the incredibly long-lived, like models which are needed for pretty much the whole time the server is up.

Given this idea, there are two main segments of memory, new and old. The new space is divided into Eden (brand new), SS0, and SS1, where SS* is a "survivor space" for not-new but not-old objects. The old space is also divided into Old and PermSpace (Aha! That is where the JVM config -XX:MaxPermSize goes).

(BTW, -XX is an experimental flag, while -X is standardized)

The idea is that GC is misnamed: it is not a garbage collector, but really a garbage avoider. It actually never touches the dead, but promotes the living to the next level (from Eden to SS*, then to Old, etc.).

There are two types of GC, major and minor. Minor happens only when Eden is full, which moves the living to the SS space and resets Eden to write over the dead. Major GC affects all areas of memory, is slow, and should be avoided if possible. The goal is a high mortality rate in new space, and a very low mortality rate in old space.
He showed a before-and-after tuning comparison (the before/after charts were images and don't survive in this text version), using settings along the lines of:

-XX:NewSize=256m
-XX:MaxNewSize=256m
-XX:SurvivorRatio=8
-XX:PermSize=32m

Finally he gave a list of tool recommendations for Java tuning
  • JPS
  • JSTAT
  • JMAP
  • JHAT
  • VisualVM* (his favorite, open source all in one troubleshooting tool)
  • VisualGC
  • MAT

Functional JS by Pratik Patel was a great overview of the difference between functional and OO. Basically it comes down to the fact that OO in JS is messy for the things that we need to do, and functional is a "recipe for simplicity". Again he talks about underscore.js, but this time goes deeper into:

  • composing
  • currying (a special composition where the first argument is the composed function)
  • chaining
Referential transparency is defined as "always getting the same result with the same input".
And in general, OO talks about nouns, while FP talks about verbs.
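As a rough illustration of composing and currying (hand-rolled here for clarity rather than underscore's actual API; `add` and `double` are throwaway examples of mine):

```javascript
// compose: feed the result of one function into the next, right to left.
function compose(f, g) {
  return function (x) { return f(g(x)); };
}

// curry (two-argument case): take arguments one at a time instead of all at once.
function curry(f) {
  return function (a) { return function (b) { return f(a, b); }; };
}

var add = function (a, b) { return a + b; };
var double = function (x) { return x * 2; };

// curry(add)(3) is a new one-argument function; compose chains it with double.
var addThenDouble = compose(double, curry(add)(3));
console.log(addThenDouble(4)); // 14
```

Chaining is the same idea dressed up as method calls, which is what underscore's `_.chain` gives you.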



Narcissistic Design by Stuart Halloway


Just to set the context: Stuart is extremely passionate about Clojure and functional programming, so he generally exposes all of the evils of OO, and this talk is a humorous portrayal of that. Although if you listen with an open mind, putting aside your OO education, there are some interesting points of view to consider.

The premise behind his talk is that by thinking only about "me", you can add lots of complexity to the code base, all in the name of best practices. So here is a list of "best practices" that increase the complexity of the code base and make it harder to share.

  1. Embrace setter methods :
    Setters undermine the two best parts of OO: constructors and interfaces
  2. Prefer APIs over data:
    data forces decoupling, while API couples by temporality, language, mutability, semantics, and esoteric features
  3. Start with DSLs (Domain Specific Languages)
  4. Always connect, never enqueue
  5. Create abstractions for information
  6. Use static typing across subsystem boundaries
  7. Put language semantics on the wire
  8. Write lots of unit tests
    He argues that once tests are passing, they tend to always pass
    He also argues that generative testing would test more options than a handful of pathways
  9. Update information in place
    This practice comes from the assumption that memory / storage is expensive and resources are dedicated
    It is not needed anymore, and immutable data facilitates easy sharing, distribution, concurrency, access, and caching
  10. Leverage context
One of the neat ideas was to learn some of the data languages like you would a programming language:

  • avro
  • bson
  • csv
  • edn
  • fressian
  • hessian
  • java
  • json
  • kryo
  • protobuf
  • thrift
  • yaml
  • xml



Build your own technology radar by Neal Ford of Thoughtworks was by far the best / most inspirational session of NFJS.

A radar is basically a snapshot in time by opinionated technologists. It is neither comprehensive nor strategic, and it is not a technology lifecycle assessment tool. It is ultimately a tool to monitor ("keep on the radar") technologies.

"You can't tell that the technology is stable or collapsing when you are on the inside"

The interactive version is at http://www.thoughtworks.com/radar/#/

The rings of the radar from outer to inner are

  • Hold - proceed with caution (notice there is no "avoid" category)
  • Assess - worth exploring with the goal of understanding how it will affect you
  • Trial - worth pursuing on a low risk project. Get to know its strengths, limitations, and use cases
  • Adopt - finished a trial and found usage patterns
    • Mason Razor: "I'll make fun of you at the pub if you aren't doing this"
Neal strongly encouraged each developer to make their own radar. It is part of your knowledge portfolio, which needs to be managed. A simple API for plotting one already exists at https://github.com/bdargan/techradar

As far as evaluating new technologies, here are the litmus tests that were recommended:
  • Testability
  • Integratability
  • Learnability
  • External references (books, blogs, conferences talking about it)
One of the best tactical / practical / immediately applicable comments was to write unit tests in a different language - for example, write Java unit tests in Groovy. Given that tests are a form of overhead, there is a benefit to making them as efficient as possible. Besides, you get to practice a new language.

That night, I sat in the lobby of the hotel and attempted to re-write one of our test classes in Groovy. I ended up with 4 people pairing with me and it was SUPER simple. Within an hour, I had nearly the whole class in Groovy, and that was GDD (Google-driven development, given that this was my first exposure to the language).

Neal also strongly recommends that COMPANIES create a technology radar as well.
  • Gives you a platform for continual analysis
  • Unified message to non-technical but interested people
  • Excuse to get together to have an impassioned conversation

"The future is already here, it just is not evenly distributed" 

Research shows that your next job comes from the weak links in your social network, rather than your strong links like friends and family.

The top pitfalls that lead to agile troubles by Andy Painter was just a wealth of nuggets for my scrum master position.

I enjoyed this presentation so much, that I even asked Andy to join me for lunch, so we could continue talking. I had several questions for him, and he provided great insight.



  • Agile vs waterfall
    •  Agile is like flying an airplane, you continually have to course correct for wind, weather, traffic etc. Waterfall is like riding a train, where the plan has already been laid down for you to follow. 
    • You can't say that Agile is faster than waterfall because they are not the same product. It is an apples to baseballs comparison. You can't assume that the finished agile project is anywhere near the same thing that would have been delivered via waterfall
  • Scrum master
    • Is about teaching
    • In order to teach, they need to be able to listen
    • Also needs to provide a safe place to fail, so that learning / improvements can happen
  • Product Owner
    • Available, knowledgeable, and empowered
    • PO as proxy should be a temporary position
  • Learning from Failures
    • True failure is not working through the challenges and giving up
    • "I welcome failure, not because I desire it as an ultimate end, but because I recognize that any true success must be born through some amount of failure. And because of this allowance, I expect my team to be better next year than they are today. 
  • Cargo Cult
    • I hadn't heard this term before, but it describes the practice of imitating actions while expecting them to produce the desired results.
    • For example, a team that says "we have a backlog, daily standups, and demos; therefore we are Agile" without understanding the why is valuing style over substance, and might be an example of the cargo cult.
  • Standup
    • If what you did yesterday doesn't equal what you committed to in yesterday's standup, that is an impediment
    • It isn't a status meeting, but a daily planning meeting
    • It is a layered PDCA cycle - Plan, Do, Check, Act
    • If the answer is "I'm working on it" for a couple of days, that means there isn't enough visibility
  • Velocity
    • It is a PLANNING tool, not an accelerator
    • By definition it is a constant rate of motion
    • In retrospect, it should have been named "pace" instead, to take away the stigma that it can be throttled
    • The best stats come from the last EIGHT (8) sprints (3-4 months). This encompasses the average amount of stuff - vacations, sick days, life events.



Javascript toolchains by Nathaniel Schutta
Here is his take on the JS toolchain:

  • Node.js - Even if not for the server-ness, it is needed for the other essential tools
  • Grunt - build tool
  • Jasmine - testing
  • Mocha - testing with built-in async and code coverage
  • Karma - automatic test execution
  • Phantom - Headless browser for CI
  • Sinon - spies, mocks, and stubs
  • Istanbul - code coverage
  • JSHint - static code analysis (more configurable than JSLint)
  • Plato - static code analysis with complexity reports & maintainability indexes
  • Angular - MVC large application management
  • Bootstrap - CSS framework
  • Require - dependency management
  • Uglify - JS compressor



For the rest of the sessions that I attended, while they were interesting, I'm not going to summarize them.

  • Leading Technical Change, while good to hear again, was all common sense. There were good quotes that came out of it though: "Attention is our best resource, don't waste it." "Attention is a bit like real estate, in that they're not making any more of it. Unlike real estate, though, it keeps going up in value."
  • Web Security was amazingly interesting, especially in that we didn't just go over the obvious vectors; we actually worked through OWASP's WebGoat security demonstration application as a group lab. That was AWESOME!
  • Clojure in 10 big ideas was great at getting my head around a purely functional language. It would do no good to summarize or list the 10 points, because without the 15 supporting slides for each point, it is useless. 


Book Recommendations from the conference

Tuesday, June 3, 2014

JPA Annotations and Migrating from SQLServer to Derby in unit tests: A debugging case study

Problem: 

Our service layer has a series of integration tests that rely heavily on the data in the SQL Server development database. This was fraught with issues.

  • If the status of a targeted record changed, the tests would fail
  • If the test had an error, it would fail to cleanup and there would be "unitTestUsers" left in the db. 
  • If you were debugging, and didn't finish test execution, it wouldn't clean up the record
  • It was slow and required VPN access 

Solution: 

Move to Derby, which is an in-memory Java database.

Obviously we knew that we'd have to create all of the lookup tables, like the us_states table, and the roles table.

But we ran into numerous bugs which warrant a blog post. I'm going to list them in order and how I solved them, but like an onion, resolving one layer led to another and another.

Background: 

We are using JPA annotations in a JAXB framework to manage the persistence. EclipseLink should read these annotations and auto-generate the tables in Derby.

Roadblock: 

Missing tables:

We were getting a dozen missing tables and the tests would have the following errors:
javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.4.2.v20130514-5956486): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLSyntaxErrorException: Table/View 'RC_ACTIVITY_LOG' does not exist.
Error Code: 20000

After exploring typos, case sensitivity, and persistence.xml settings and configurations, two solutions came out. The short version is that there were 2 errors happening during the create-table phase.

Discoveries: 

Turning on logging to "Fine" in the persistence.xml will output all of the SQL that is being run.
<property name="eclipselink.logging.level" value="FINE" />

The log is very, very long (obviously) and was being cleared from the console, so I saved the output to a file.

This is easily done within IntelliJ in the Run/Debug Configurations window. There is a 3rd tab called "Logs" and a checkbox to indicate "Save console output to file:"

The following errors were discovered:
Internal Exception: java.sql.SQLSyntaxErrorException: Column name 'IDX' appears more than once in the CREATE TABLE statement.  
Error Code: 30000

With the corresponding SQL Statement:
CREATE TABLE job_category (occupation_id INTEGER GENERATED BY DEFAULT AS IDENTITY NOT NULL, major_group_id VARCHAR(255), profile_nbr INTEGER, idx INTEGER, idx INTEGER, PRIMARY KEY (occupation_id))

Notice the double "idx INTEGER"?

After much research, I finally found that it was coming from our annotations:
@OneToMany(cascade = CascadeType.ALL, mappedBy="profile", orphanRemoval = true)
@OrderColumn(name="idx") 

I couldn't figure out why it was doubling, but changing name="idx" was reflected in the error (still doubled), and leaving it out resulted in the default name doubled. If you have an answer or a link to why it doubles, and a better solution, I'd love to hear about it.

Ultimately, I ended up copying the SQL statement from the logs, removing one of the idx references, and recreating the table at the beginning of our tests.

Here is the code for that:
    private static void createTable(String sql) throws Exception {
        // try-with-resources closes the statement and connection even if the update fails
        try (Connection connection = getConnection();
             Statement stmt = connection.createStatement()) {
            stmt.executeUpdate(sql);
        }
    }

    private static Connection getConnection() throws ClassNotFoundException, SQLException {
        // create=true spins up the in-memory database on first connection
        String strConnectionURL = "jdbc:derby:memory:TestDB;create=true";
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        return DriverManager.getConnection(strConnectionURL);
    }

But then we ran into another error creating tables:

Internal Exception: java.sql.SQLSyntaxErrorException: Constraints 'SQL140602131233001' and 'SQL140602131233000' have the same set of columns, which is not allowed. 
Error Code: 30000

With its corresponding SQL
CREATE TABLE rc_activity_log (rc_activity_log_id INTEGER GENERATED BY DEFAULT AS IDENTITY NOT NULL UNIQUE, created_on_dt TIMESTAMP NOT NULL, NOTE VARCHAR(255), user_id INTEGER NOT NULL, profile_nbr INTEGER, PRIMARY KEY (rc_activity_log_id))

And the Annotations: 
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name="rc_activity_log_id", unique=true, nullable=false)

Which means that because there is an @Id and unique=true, two identical constraints are created on the table, which causes it to fail. (Thanks https://java.net/jira/browse/GLASSFISH-3252)

Given that the code was working before the Derby migration, and we didn't want to change any of it, we just manually created these tables as well.

Extra: 

One thing that was particularly helpful was being able to determine the actual tables that existed in memory. So from http://stackoverflow.com/questions/7411020/calling-derby-java-db-show-tables-from-jdbc we have:

public static void printDerbyTables() {
    // http://stackoverflow.com/questions/7411020/calling-derby-java-db-show-tables-from-jdbc
    try (Connection connection = getConnection()) {
        DatabaseMetaData dbmd = connection.getMetaData();
        // null filters mean "all catalogs, schemas, tables, and types"
        ResultSet resultSet = dbmd.getTables(null, null, null, null);
        System.out.println("DERBY TABLES >>>>>>>>> ");
        while (resultSet.next()) {
            System.out.println("TABLE_NAME is " + resultSet.getString("TABLE_NAME"));
        }
        System.out.println("<<<<<<<<<<<<< END DERBY TABLES  ");
    } catch (ClassNotFoundException | SQLException e) {
        e.printStackTrace();
    }
}









Wednesday, April 16, 2014

Flex debugging timing out in Firefox

Recently I was debugging a project in Firefox and couldn't step through the code for more than a minute before I would get disconnected. 

Discovered this link

https://coderwall.com/p/fhhdla

Which suggests: 



  1. In your url bar type in "about:config" without the quotes.
  2. Locate the search field at the top of the document and enter: "dom.ipc.plugins.timeoutSecs" without the quotes.
  3. Double click on that item and set its value to whatever you'd like to increase the amount of time before Firefox considers a plugin hung.
  4. NOTE: if you change that value to -1 it should never time out.
And it seems to work for me, your mileage may vary. 


Monday, March 10, 2014

Udacity Online Training Review

Udacity Hadoop and MapReduce
I recently took the Udacity Hadoop and MapReduce online course, with the extra classroom / coaching option. The other interesting aspect was that a number of my coworkers at Twin Technologies  took the class at the same time.

I come at this course from a unique context. Not only am I a senior developer, but I'm also a university professor, a professional technical trainer, and my wife will have a PhD in educational psychology in May. (This is relevant in our shared discussions about learning and trends.)

I thought that this course was excellent. I liked that the videos explained the big picture, but there were hands-on, real-world exercises to practice with. The multiple choice / multiple answer questions throughout the video were a little annoying, but I recognize the need for assessment if you are going to be offering a certificate at the end.

I appreciated that the data sets were "messy", some being incomplete or in different formats, and that you had to add error checking to your code. It was great to actually have to get in and do some real examples. I'm happy that the exercises weren't step-by-step, as I have found that most students I've had were pretty good at following directions, but after the exercise is over, they really didn't get anything from the "happy path".

One of the things that struck me as interesting, since this class is mostly automated, is that there is a "right" answer. This was useful, as it allowed for safe failures and fast feedback, both of which promote learning. It was also frustrating when, in some of the exercises, the answer that I got was not what the computer wanted, mainly because of an anomaly in the data.

For example, one of the exercises was to determine the most frequently accessed file, and how often it had been accessed. I found the file easily enough, but I was 4 off the total. The course software only told me I was wrong. I had to read through the discussion to determine that I was 4 off, and what the reasons behind it were. Apparently, 99.98% of the paths in the log files we were processing were relative. But there was that 0.02% that had absolute paths. This meant that the file in question didn't match the specific string criteria that the others did, and thus ended up outside the total.

Overall this was a good lesson to learn, that the data might come in forms that are unexpected. However, I feel that in the real world, missing that 0.02% would have been an acceptable oversight. Mostly I was frustrated at the extra time that the exercise required to determine this.

The coaches were readily available, and certainly had a good handle on coaching.... asking leading questions, listening, providing appropriate feedback without giving away the answers. That was nice to see, and I always enjoyed talking to them.

The exit interview was a novel idea. Obviously part of the interview is for you to prove you are who you said you are, which I'm assuming is for whatever accreditation Udacity is seeking. But the other interesting part is the feedback. Being an agileist, feedback is hugely important, and good feedback is really difficult to come by. So I ended up talking to the interviewer for longer than he probably expected, but I offered up lots of ideas for improvement as well as my thoughts on the course (some of which are reflected here). One thing to note is that the exit interview gets scheduled about 1 week out from the time of request. If you are seeking certification and are on the subscription plan, you will want to account for that extra time.

It was fantastic to take this course with some of my co-workers. Of course we all worked at a different pace, but in a remote environment to be able to have a shared experience outside of our annual meetings and project work, is really valuable for team building. It was nice to be able to share our successes and pitfalls. I especially liked having other people checking in with each other to make sure we were all still moving along. I enjoyed feeling like I was helping when I could clarify a question or tell people to read the discussion or notes before starting on an exercise. It certainly was more personal and satisfying than posting to the forum.

My takeaway from this class, while not new information, is that I certainly prefer to be higher up on Bloom's taxonomy of learning, with fast feedback, safe failures, and "messy" real-world data and problems in my learning opportunities.

Next time, though, I would push myself to complete it a little faster so that I didn't incur a second month of the subscription fee.


Monday, January 27, 2014

Using Twitter for classroom queue management

Problem: 

I teach a hands-on multimedia class of 25-30 students. During lab times, the students often need help. I was finding that as I was helping one student, their neighbor would grab me as soon as I was done, and I could never get to the other side of the room.

I wanted a way to create a queue that made it fair to all students.

Constraints:


  • Students wouldn't need an account or special access
  • It would always be available, so that if students were having a problem before class, they could be first in line
  • There would be a mobile option, so I could check the queue while walking around the room; I didn't want to have to go back to the computer to determine who was next
  • I could get a notification even if I didn't have the queue running on my machine
  • Students could see their position in the queue
  • It would be easy, without many moving parts; this was something that I needed to stand up in a couple of hours and have work reasonably well

Solution: Twitter as the backend


Twitter offered everything that I needed:
  • Timestamps that are already managed and ordered
  • Access and notifications already on my phone
  • Some students already had accounts, and a dev Twitter account with a simple custom interface would allow anybody to access it
  • Search for the special hashtag, and the queue comes up
And as an added bonus, I got a chance to play with Node.js, socket.io, jQuery, Jade templating, Heroku hosting, and a bunch of other technologies that were new to me.

The end result is that the students LOVE it. They recognize that it is fair, and that it is really easy for them to use at any time. 

Inspired by Google, my UI was super simple:


The tweet looks like "Drew Shefman needs help #uhmultimediaHelp @UHMultimedia 1390595370659"

  • Drew Shefman is obviously the first and last name. 
  • #uhmultimediaHelp is the hashtag that I've chosen (University of Houston multimedia). This is ultimately my search term. 
  • @UHMultimedia is a DIFFERENT Twitter account than the dev account. It is the Twitter account I use for class announcements. I include this in the message so that I will receive email notifications. 
  • The number at the end is the current timestamp in milliseconds. I found that if a student asked for help twice, the second request would not show up, because Twitter rejects a duplicate tweet posted within a short time period; the timestamp makes every tweet unique. 
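Composing the tweet from those pieces is essentially a one-liner. A sketch of a helper (the function name is mine, not from the project; the hashtag and mention are the ones described above):

```javascript
// Compose the queue tweet. The trailing Date.now() keeps every tweet
// unique, so Twitter's duplicate-tweet filter can't silently drop a
// student's second request for help.
function buildHelpTweet(studentName) {
  return studentName + ' needs help #uhmultimediaHelp @UHMultimedia ' + Date.now();
}

// e.g. "Drew Shefman needs help #uhmultimediaHelp @UHMultimedia 1390595370659"
```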

So here is the flow: 

  1. On page load and before page display, twitter.search() for #uhmultimediaHelp
  2. Filter out any tweets older than 5 hours (my class is only 3 hours long)
  3. Display results on page, showing name and formatted time (HH:MM)
  4. Student enters name and hits "I need help" 
  5. Socket broadcasts new entry and it gets prepended to the list element
    (this is primarily so that there is immediate feedback, and the impatient students know not to submit multiple times)
  6. twitter.updateStatus()
  7. On twitter submit success, repeat twitter.search()
  8. On search success, socket broadcasts a refresh with the resultant search list.
    (I perform this seemingly duplicate refresh step so that if any students use their personal Twitter accounts with the hashtag, they will get pulled into the queue. I decided not to use the twitter.stream() API, as it added a significant layer of complexity. The majority of my students opt for my interface over their own accounts, so this is a reasonable solution for now. This decision will fail if personal-account usage outweighs the queue interface, as a Twitter refresh only happens when a student submits through the queue.)
  9. Then, starting from the bottom of the list, I can help the students in order. Woot!
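Step 2 is the only part of the flow with any real logic. Here is a hedged sketch of the five-hour filter; the tweet shape mimics Twitter search results, but the field names and helper name are illustrative:

```javascript
const FIVE_HOURS_MS = 5 * 60 * 60 * 1000;

// Keep only tweets created within the last five hours.
function recentTweets(tweets, now) {
  return tweets.filter(function (tweet) {
    return now - new Date(tweet.created_at).getTime() < FIVE_HOURS_MS;
  });
}

// A request from an hour ago stays in the queue; yesterday's is dropped.
const now = Date.now();
const queue = recentTweets([
  { from_user: 'student1', created_at: new Date(now - 1 * 60 * 60 * 1000).toISOString() },
  { from_user: 'student2', created_at: new Date(now - 24 * 60 * 60 * 1000).toISOString() },
], now);
```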

Technical challenges: 


This was my first foray into a decently complex HTML5 / Node app, so starting out *everything* seemed to be a technical challenge. Outside of googling nearly every single line of code I was typing, here are the specific things that I ran into.
  • I chose ntwitter as the node package to leverage for the Twitter API. Unfortunately it had not been updated for Twitter's new 1.1 API, so I had to fork it and modify it. You can get that version at https://github.com/dshefman/ntwitter. The change to 1.1 was simple, and you can see it at this commit
  • I tried using the twitter.stream API, but it requires a single instance, while each connection (webpage) was trying to create a new stream. After much research, one idea was to create two Heroku instances, one for the stream and one for the app. This was going to be too much work, so I took a different path.
  • On Heroku, I pushed up my repository without following the instructions, basically leaving out the init(). Boy, it didn't like that; I basically had to delete my whole Heroku account and start over. 
  • Twitter's timestamps are in GMT, and my timezone is GMT-5. Not a problem, until 7pm (my class runs 5:30-8:30pm). 7pm CDT is 12am GMT (00:00:00), at which point all of my displayed times jumped back 5 hours. It was a simple fix, but something I didn't account for when testing during the day. 
  • Initially I didn't provide fast enough feedback, and students would submit half a dozen times before their entry showed up on screen. Then they were embarrassed for submitting so much. 
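The timezone bug comes down to display formatting. A hedged sketch of the kind of fix involved, letting the Date object do the GMT-to-local conversion instead of reading hours straight off the GMT string (the helper name is mine, not from the project):

```javascript
// Format a tweet's GMT timestamp as local HH:MM.
function formatLocalHHMM(createdAt) {
  const d = new Date(createdAt); // parses the GMT timestamp
  const pad = function (n) { return (n < 10 ? '0' : '') + n; };
  // getHours()/getMinutes() report in the machine's local zone, so a
  // 00:15 GMT tweet displays as 19:15 in my GMT-5 classroom instead of 00:15.
  return pad(d.getHours()) + ':' + pad(d.getMinutes());
}
```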

Technologies Leveraged (Each was decently new to me):

  • CSS 
  • Jade templating 
  • JS
  • Jasmine BDD
  • Karma test runner (continuous mode is SUPER cool)
  • Node.js / Express
  • Socket.io
  • Twitter API via ntwitter
  • Heroku node hosting
  • jQuery

Usage example





Code:

You can find the project at:
https://github.com/dshefman/TwitterHelpQueue

There is one file missing: twitterAPICredentials.js
You will need to add your own API keys if you want to use the code.

Since I'm fairly new to all of these technologies, I gladly welcome feedback / best practices for improvement.  Thanks! 



Thursday, January 2, 2014

SCNA 2013 review



I attended Software Craftsmanship North America (SCNA) this past November (2013) and wanted to share some of the major things that I got from the conference.

Software Craftsmanship is a philosophy about developing software and raising the bar of the entire profession. The craftsmanship manifesto augments the agile manifesto, adding that the craftsman also values well-crafted software, steadily adding value, a community of professionals, and productive partnerships.

SCNA is the conference for raising the bar. It is a language- and technology-agnostic conference, a community of people who share the above values. The opening remarks of the conference are "Your annual attitude adjustment," which is absolutely true. I find the conference humbling and inspiring, and everyone there cares about my growth as a developer. Thank you, Twin Technologies, for sending me.


The conference started with a reprimand from renowned author and personal inspiration Robert Martin. His admonition concerned the healthcare.gov debacle and how its fallout has negatively affected, and will continue to affect, our profession.

The takeaway from his presentation was the "responsibility of knowing". Somewhere, some developer knew that healthcare.gov wasn't tested and ready for production. Relating this to the Challenger explosion, Robert Martin asserts that we, professional software developers, MUST be able to stand firm when we "know" that something isn't ready. A technology problem has now become a policy problem, where technology is preventing a legally created policy from happening. He cited some great references: the Developer Bill of Rights and the Client Bill of Rights.


Sara Gray then presented a fantastic analogy about writing new worlds. Her point is that you create the [programming] world that you live in, and you should be thinking about the others (including your future self) who will have to live in that world. The analogy she used was Harold and the Purple Crayon: you get to create your world and everything that you need, but sometimes you accidentally create monsters.

Sara asserts that, like parts of a sentence, there are parts of code. There are named things, like variables, parameters, and methods. There are pieces of knowledge, which should be extracted and then named. There are changes of state, which could be represented as "who changes state" and might be a good place for helper classes. Finally, there is the "speaking voice", which I believe is the overall flow.


Ken Auer says that you can't live on stackoverflow or github alone. When addressing poor code, the goal should be to raise the bar and to transfer knowledge in a constructive way, not to win the argument by proving your superior knowledge. He gave his mantra, "make it run, make it right, make it fast", but recognized that "make it right" is limited by the most skilled participant. Based on the Dreyfus model of skill acquisition, the novice has to be "in sight" of the expert in order to grow. One of the nuggets that I always look for at conferences is potential interview questions, and Ken provided a good one: "What software have you shipped?"



One of the most thought-provoking, and immediately applicable, sessions was given by Carina Zona, entitled "Schemas for the Real World". Her assertion is that giving a form field a name or restricted answers limits its scope, and that you could declare a person invalid. People are never edge cases. Some examples would be a gender field of male/female; a sexuality field of hetero/bi/gay; drop-down boxes for religion, race, or relationship status. The user often has to choose to be inauthentic to fill in these fields, saying "this isn't really me, but it is the only option". Why can't all of these fields be text inputs? What questions are we really trying to answer? Instead of asking for gender, maybe ask "what pronoun do you prefer?" Based on her research, when given a free-text option for gender, only 40% responded with "m, f, male, or female".


Here is another example of potential invalidation: Facebook's relationship status. Does it alienate people not in or not looking for a relationship? When does a "widowed" status change to "single"? How is the "Married, Separated, and Looking" status represented?

The cool thing about this session was that that evening I had dinner with some college friends who work in admissions in higher education. We discussed that this is the exact thing that they *battle* with their IT group all the time. It is very unusual to have a long-lasting conversation with non-developers about things I've learned at a conference.

The other neat aspect of this, was that I was able to put it to use in the next form that I created. I'm organizing a parent / child dance for my daughter's school, and on the form, instead of having check boxes for mother and father, I have the following field: "Relationship of the attending adults to the child(ren)." This will hopefully give me the answers that I'm looking for... answers like "mother and father, step-father, mom1 and mom2, grandmother, etc". It provides a very rich data set.



The panel on software quality had some interesting insights. Does quality even exist? Given that quality is an individual definition, it might not even be describable until you can see it, or see its absence. Interestingly enough, quality has no value today; it has value tomorrow. It is something that you pay forward. Simplicity, which is often the sign of good quality, takes experience because it is hard. Testing is a good *tool* that might be an indicator of quality, but it is not a rule that tests mean high quality.





Dave Thomas, one of the Pragmatic Programmers, talked about the unknown knowns: that we should teach people what we were taught, but infuse it with what we have learned. Given the matrix of what you know that you know, what you know that you don't know, what you don't know that you don't know, and what you don't know that you know, it is this last one, the unknown knowns, that might be the most valuable. It is the cumulative integration of your experiences, and the critical piece that we have to try to share. Simple examples are how you recognize a face, or how you walk: thinking about them actually breaks your ability to use the knowledge. The parts that we need to transfer are why you structured the code this way instead of that, or why this code makes you feel dirty.


Sandro Mancuso provided some insight into the criticism the craftsmanship movement has encountered, and its rebuttals. 

The first criticism is that "craftsmanship is just XP rebranded". Craftsmanship is an ideology, not a methodology like XP. It is about principles, not practices; practices are chosen based on the value that they bring. 

"Craftmanship is an elitist movement." Actually, we (the crafts-people) are fully inclusive, trying to raise the bar across the industry. We recognize and support the need of novices. We NEED them for everyone to grow. 

"First crafted code, then whatever the client wants." We practice writing quality code, so that time is not a factor to deliver quality. Quality becomes inherit in our work. 

"Pragmatism over religion." The message is professionalism, it is not TDD or practices. "What does it mean to be a craftsman", is not universally definable, but a personal definition. How it is done is as important as having it done. 

One thing that struck me as particularly powerful was his mentor's description of his first attempt at code as "disrespectful". Out of all of the adjectives available to him... "bad, inefficient, ugly, unmaintainable"... he chose "disrespectful". That is quite an interesting context. 




In their talk on the nature of software development, Ron Jeffries and Chet Hendrickson assert that most agile teams have it wrong. Coding is a team sport, and everyone on the team should be somewhat capable of "scoring" even if that isn't their primary duty. Backlog refinement shouldn't be the sole job of the product owner, but a team problem-solving effort. The whole team needs the vision of the product, the "what are we building". The team is actually the product owner, and the product owner is technically the "product champion".



Finally, outside of what I've already written about, there were some notable quotes that I wanted to share. 
  • Trust-driven development. How transparent can you be to build trust? 
  • "Does my commit increase or decrease the entropy of the system?"

Oh, and one last thing: I was able to participate in a code retreat at 8th Light. I'm very envious of their bookshelf.