Thursday, November 15, 2012

Always linking Amazon to the school rewards


Our elementary school has a link on the PTO website where a percentage of Amazon purchases gets donated to the school. That's great, but I always forget to click the link before I go shopping.

I checked with Amazon tech support, and the ability to save / manage that within my account is not a current feature (although I requested it).

So now what?

Chrome Extensions: Redirector

This will take a URL, like www.amazon.com, and redirect it to the one that has the school links.
Note that I'm sure there are similar tools for other browsers, but this one is for Chrome.

Here are the steps that you will need to follow to set this up.
  1. From Chrome, download Redirector 
  2. Click on Rules Manager
  3. Click on the red plus (+) next to the word "Name"
  4. There are four text fields. Fill them in as follows:

Name: (whatever you want)
Amazon to KolterAmazon

Match:
www.amazon.com

Substitution:
(www.amazon.com)/?([^\?]*)\??(.*)

Replacement:
$1/$2?$3&tag=kolelepto-20&camp=212677&creative=384117&linkCode=ur1&adid=0EN8T40BVQFZ0NCS99MQ&


This rule will match any of the following:
  • www.amazon.com
  • www.amazon.com/
  • www.amazon.com/anything_else_without_a_question_mark
  • www.amazon.com/anything?anything
and it will append the school information to the end.
The school information is the part that says:
&tag=kolelepto-20&camp=212677&creative=384117&linkCode=ur1&adid=0EN8T40BVQFZ0NCS99MQ&
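
If you want to sanity-check the rule before trusting it with your shopping, here is a quick AS3 sketch of the same match / replace (the tag string is the real one from above; the test URLs and the function name are just for illustration):

var rule:RegExp = /(www.amazon.com)\/?([^?]*)\??(.*)/;
var schoolTag:String = "&tag=kolelepto-20&camp=212677&creative=384117&linkCode=ur1&adid=0EN8T40BVQFZ0NCS99MQ&";

function rewrite(url:String):String
{
    // $1 = host, $2 = path (up to any ?), $3 = existing query string
    return url.replace(rule, "$1/$2?$3" + schoolTag);
}

trace(rewrite("www.amazon.com"));                  // www.amazon.com/?&tag=kolelepto-20...
trace(rewrite("www.amazon.com/gp/product/B001"));  // www.amazon.com/gp/product/B001?&tag=kolelepto-20...
trace(rewrite("www.amazon.com/s?k=legos"));        // www.amazon.com/s?k=legos&tag=kolelepto-20...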

And now, whenever you navigate to www.amazon.com, it should automatically look as though you clicked the support-the-school link.

Friday, October 26, 2012

On Being a Senior Engineer, Estimating

So, I'm moving from being a freelance consultant to a "technical lead". While I feel that I have a lot of skills in that area already, I'm always looking to grow.

So with that I'm going to be adding some posts about what it means to be a technical lead.

Also, I'm thinking about designing a presentation around estimating, because that is something that I want to learn more about. So I'm also going to be adding some estimation research up here.

And of course, other technical discoveries (AS3, Flex, and others) will always be here.

So I just came across this blog, On Being a Senior Engineer, which I think is fantastic. Here are the bullets:

  • Mature engineers seek out constructive criticism of their designs.
  • Mature engineers understand the non-technical areas of how they are perceived.
  • Mature engineers do not shy away from making estimates, and are always trying to get better at it.
  • Mature engineers have an innate sense of anticipation, even if they don’t know they do.
  • Mature engineers understand that not all of their projects are filled with rockstar-on-stage work.
  • Mature engineers lift the skills and expertise of those around them.
  • Mature engineers make their trade-offs explicit when making judgements and decisions.
  • Mature engineers don’t practice CYAE (“Cover Your Ass Engineering”).
  • Mature engineers are empathetic.
  • Mature engineers don’t make empty complaints.
  • Mature engineers are aware of cognitive biases.


And some of the points about estimation:
Estimation is really about responsibility

(Quoted directly from the blog)

From the Unwritten Laws:
Promises, schedules, and estimates are necessary and important instruments in a well-ordered business. Many engineers fail to realize this, or habitually try to dodge the irksome responsibility for making commitments. You must make promises based upon your own estimates for the part of the job for which you are responsible, together with estimates obtained from contributing departments for their parts. No one should be allowed to avoid the issue by the old formula, “I can’t give a promise because it depends upon so many uncertain factors.”
Avoiding responsibility for estimates is another way of saying, “I’m not ready to be relied upon for building critical pieces of infrastructure.” All businesses rely on estimates, and all engineers working on a project are involved in Joint Activity, which means that they have a responsibility to others to make themselves interpredictable. In general, mature engineers are comfortable with working within some nonzero amount of uncertainty and risk.

And there is a cognitive bias (one of my weaknesses) around estimating:

Planning Fallacy – (related to the point about making estimates under uncertainty, above) basically: being more optimistic about forecasting the time a particular project will take.



Thursday, October 25, 2012

Remote Debugging a swf on Android

So I recently had a very thorny challenge of debugging on Android.

Because of business / technical reasons, we HAD to serve the swf via HTML through a specific server. The problem I was having was that at full screen the video went blank, although we could hear the audio.

Now, just as some background: the project was inherited, and while there was a decent MVC architecture in place, it was possible for any element on the screen to access any other element and modify its properties... which happened quite a bit.

My challenge was to try to figure out how to get *some* information out of the device so I could have a clue as to what was going on.

I tried to reverse tether and to proxy, but I couldn't do either of those without rooting, which I wasn't willing to do.

Then I tried setting up a hotspot from my laptop (http://www.connectify.me/) to see if I could get Charles in place to do a local mapping, which I could leverage for remote debugging in the IDE, but my version of Android didn't like ad-hoc networks.

Finally I discovered that Monster Debugger has a P2P version. This allowed me to run the De Monster desktop client on my desktop and the app on the device, and as long as they were on the same wifi network, I could get some telemetry. ROCK ON!

So I created a MonsterTraceTarget to go with my logging, and I got trace statements. I could also drill into the visual hierarchy and validate and set properties.
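
The MonsterTraceTarget class name is my own. Here is a minimal sketch of what it amounts to, assuming the MonsterDebugger v3 AS3 client and the standard Flex logging target hooks:

package com.squaredi.logging
{
    import com.demonsters.debugger.MonsterDebugger;
    import mx.core.mx_internal;
    import mx.logging.targets.LineFormattedTarget;

    use namespace mx_internal;

    // A logging target that forwards every formatted log line to the
    // De Monster desktop client over the debugger's (P2P) connection.
    public class MonsterTraceTarget extends LineFormattedTarget
    {
        override mx_internal function internalLog(message:String):void
        {
            MonsterDebugger.trace(this, message);
        }
    }
}

Register it once at startup (Log.addTarget(new MonsterTraceTarget());) and every logger in the app shows up on the desktop.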

After much searching, I found the video element about 16 levels deep, with its width and height set to 0,0. That explains the audio but no video. Traces confirmed that somewhere in the code we were hitting the video API (not AS3, but a wrapper around the 3rd-party video provider) and actually setting these crazy properties.

"Find Uses" (thank you IntelliJ), led me to half a dozen places where this could happen. More trace statements to determine which one was the culprit. Iterating on the "find uses" and trace statements, I finally discovered that it was an omission that had happened *months* earlier (so much for test early, test often), where there was a conditional test within the configuration startup for touch device and full screen (duh), where the stage rectangle was supposed to be calculated, but wasn't - hence the zeros.

Huge thanks to De Monster and IntelliJ for the tools needed to solve this thorny, two-days-of-research issue.


Tuesday, July 10, 2012

Column Spanning with Flex Spark DataGrid

This post is long overdue, but this was something that I was playing around with for a potential requirement for a project many months back.

So here is the problem: can we get a column span happening on the Spark DataGrid? Yes.

[Embedded Flash demo of the column-spanning DataGrid]


The link to the source code is here: https://github.com/dshefman/FlexSparkDataGridSpannableColumns


And here are the key steps

1) Set up a new skin with an added GridLayer (in this case called "rendererOverlayLayer") as the last entry
ColumnSpanningSparkDatagridSkin.mxml
<s:Grid id="grid" itemRenderer="spark.skins.spark.DefaultGridItemRenderer">
    <s:GridLayer name="backgroundLayer"/>
    <s:GridLayer name="selectionLayer"/>
    <s:GridLayer name="editorIndicatorLayer"/>
    <s:GridLayer name="rendererLayer"/>
    <s:GridLayer name="overlayLayer"/>
    <s:GridLayer name="rendererOverlayLayer"/>
</s:Grid>

2) Create your custom cell renderer, extending ColumnSpanningGridItemRenderer.
3) Create your custom spannableItemRendererAccessor, implementing ISpannableGridItemRenderer.
Within this class, you will need to fill out the following interface:
public interface ISpannableGridItemRenderer
{
    function getElementThatSpans():UIComponent;
    function getSpanningRendererLayerNameInDataGridSkin():String;
    function getNumofSpannedColumns(data:Object):int;
    function doesDataSpanColumns(data:Object):Boolean;
    function isCurrentCellHiddenBeneathASpan(data:Object, columnIndex:int):Boolean;
}
These methods are used by ColumnSpanningGridItemRenderer to determine and re-layer the span based on the data that comes in.
4) Attach your spannableItemRendererAccessor to the cell renderer within preInitialize. (Note: if you attach it during creationComplete, you will need to force an update before it takes effect.)

How it works:

The ColumnSpanningGridItemRenderer does all of the heavy lifting. It checks to see if the data has spanning enabled. If it does, it reparents the item renderer to the rendererOverlayLayer within the skin and resizes it to fit the defined cell bounds. If it doesn't, it reparents the item renderer back to the original rendererLayer and resizes it back to the original size.
There is a tricky part within the code: as I discovered, renderers are not added to / removed from the stage; instead, their visibility is toggled.



*** Tight Coupling Warning ***

Within the ISpannableGridItemRenderer, I expect that you will need implicit knowledge of the data / datatype coming in. Please remember that the data that is fed in is the rowData, and you will likely need to convert it to cellData to figure out individual spans. This could be done through separate helper / utility classes.

My example is a little extreme, as it is unlikely that each cell would be backed by its own value object. But it was easiest for this post.

Anyway, this is the purpose of the "convertRawDataToSpannableData()" method.
Then, once you have your cellData, you will need to determine whether it spans via "doesDataSpanColumns()", which is probably related to "getNumofSpannedColumns()".

That's the easy part. The hard part is determining that the following cells are hidden beneath the span. If there is some condition within the cell data that indicates they are hidden (null / empty values / constants), it isn't too bad. Otherwise, you might have to do some preprocessing of the data to compare expected column indices to actual column indices and base your conditionals on that.
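
To make that concrete, here is a hypothetical accessor sketch. The data shape (a "span" count plus an explicit list of covered columns) is invented for illustration; the method names come from the interface above:

import mx.core.UIComponent;

public class SampleSpannableAccessor implements ISpannableGridItemRenderer
{
    private var _renderer:UIComponent;

    public function SampleSpannableAccessor(renderer:UIComponent)
    {
        _renderer = renderer;
    }

    public function getElementThatSpans():UIComponent
    {
        return _renderer;
    }

    public function getSpanningRendererLayerNameInDataGridSkin():String
    {
        // Must match the extra GridLayer added to the skin in step 1.
        return "rendererOverlayLayer";
    }

    public function getNumofSpannedColumns(data:Object):int
    {
        return data.hasOwnProperty("span") ? int(data.span) : 1;
    }

    public function doesDataSpanColumns(data:Object):Boolean
    {
        return getNumofSpannedColumns(data) > 1;
    }

    public function isCurrentCellHiddenBeneathASpan(data:Object, columnIndex:int):Boolean
    {
        // Here the covered cells are listed explicitly in the row data.
        return data.hiddenColumns != null && (data.hiddenColumns as Array).indexOf(columnIndex) != -1;
    }
}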

Thursday, June 14, 2012

PureMVC, amendments to best practices

I recently inherited a very well written PureMVC project, following the recommended best practices to the letter - and it gave me a lot of headaches. So I'm making some amendments.

First, let me give some context. This was a video player project. The client originally had their own in-house video player (i.e., NetStream-based), but they are now moving to a service for their videos. They wanted to keep the UI the same and just switch out the video player.

This is perfect; this is exactly what an MVC framework should be about. I could keep the view part and just replace the video player and the model that supplied the data.

While trying to decode the source code, I started by seeing if I could instantiate just one of the mediators, to see what the view looked like. Compiler error.

It turns out that the mediator had references to the ApplicationFacade to get the notification names. That meant the entire ApplicationFacade had to be compiled, which meant that all of the registered commands, mediators, and proxies had to be compiled. So somewhere deep within the framework, I was missing a bunch of classes.

So my first step was to comment out all of the proxies and commands, as I only wanted the views. Still no joy. Not wanting to remap all of the notification declarations, I made a MediatorNameConstants class. This file had a list of all of the mediator names, but without references to the mediator classes. I updated the NAME values of all of the mediators to point to this file, as well as all of the references.

Now when I wanted a mediator, I only needed that mediator and the facade, not all of the framework.

As I worked through the code and added stuff back in, here is the list of best practices regarding PureMVC that I came up with.


  • While tedious, keep all mediator and proxy names in a separate constants file (see the sketch after this list).
    I did this so any mediators or proxies that were referenced in other mediators weren't required to be compiled in, only their names.
  • Then, within a mediator, command, or proxy, instead of creating local variables for other mediators or proxies, declare them as explicit getters.
    Like this:
    public function get myProxy():IMyProxy { return facade.retrieveProxy(MyProxyConstants.NAME) as IMyProxy; }

    That way the class can be overridden for extension and testing. This would also allow for some dependency injection, if so desired.
  • Make the proxies into interfaces, at least the major ones, like ApplicationProxy or ConfigurationProxy.
    Now there is a theory that says if you only have one implementation, then an interface isn't needed, and I agree. Although I think for something that is major and widespread, like the two mentioned above, an interface of one does make sense.
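
Here is the promised names-only constants sketch (the mediator names are hypothetical):

public class MediatorNameConstants
{
    // Names only - no imports of the mediator classes, so referencing
    // a name does not pull the rest of the framework into the compile.
    public static const VIDEO_PLAYER_MEDIATOR:String = "videoPlayerMediator";
    public static const PLAYLIST_MEDIATOR:String = "playlistMediator";
    public static const CONTROL_BAR_MEDIATOR:String = "controlBarMediator";
}

Each mediator's NAME points at this file (public static const NAME:String = MediatorNameConstants.VIDEO_PLAYER_MEDIATOR;), and so does every facade.retrieveMediator() call.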

If these things had been in place in my inherited project, it would have saved me days' worth of debugging and code-searching effort.

Friday, May 11, 2012

Any color "sepia" using Adobe PixelBender

I wanted to title this "Personally proud moments in programming", but I didn't think that I would get any traffic on it. Still, this is one of my proudest discoveries.

About 4 years ago, I was on a project where the requirement was to custom tint photographs. The client wanted not only sepia tone, but many different colors, like yellow or green.

Many of the pathways that I tried resulted in images that looked OK only if they didn't contain the target color; if they had green, for example, they would blow out and be just awful looking.

Now, I know what you are thinking: why was this hard? Convert to greyscale, add a transparent overlay, and you should be good to go... wrong... it looked awful. Try it, you'll see... awful.

Then I discovered the YIQ color space. This is the color space used by the original NTSC television broadcasts, where the Y channel alone carries the black-and-white picture. This was perfect, because once the image was black and white it was easy to add the color.


So I found this formula again (sorry to whomever I originally found it from - it WAS 4 years ago). I copied the code out of the link so you don't have to follow it, but I wanted to give credit to *somebody*.


RGB to/from YIQ
The YIQ system is the colour primary system adopted by NTSC for colour television broadcasting. The YIQ color solid is formed by a linear transformation of the RGB cube. Its purpose is to exploit certain characteristics of the human visual system to maximize the use of a fixed bandwidth. The transform matrix is as follows:




| Y |   | 0.299   0.587   0.114 | | R |
| I | = | 0.596  -0.274  -0.322 | | G |
| Q |   | 0.212  -0.523   0.311 | | B |

Note: the first line, Y = 0.299 R + 0.587 G + 0.114 B, also gives the pure B&W translation for RGB. The inverse transformation matrix, which converts YIQ to RGB, is:




| R |   | 1.0   0.956   0.621 | | Y |
| G | = | 1.0  -0.272  -0.647 | | I |
| B |   | 1.0  -1.105   1.702 | | Q |

So here is what this looks like in Pixel Bender:

<languageVersion : 1.0;>
kernel colorSepia
<   namespace : "com.squaredi.colorutils";
    vendor : "Drew Shefman";
    version : 2;
    description : "a variable color sepia filter"; >
{
    parameter float intensity;
    parameter float destColor
    <
        minValue:-2.0;
        maxValue:2.0;
        defaultValue:0.0;
    >;

    input image4 src;
    output float4 dst;

    // evaluatePixel(): The function of the filter that actually does the 
    //                  processing of the image.  This function is called once 
    //                  for each pixel of the output image.
    void
    evaluatePixel()
    {
        // temporary variables to hold the colors.
        float4 rgbaColor;
        float4 yiqaColor;

        // The language implements matrices in column major order.  This means
        // that mathematically, the transform will look like the following:
        // |Y|   |0.299     0.587   0.114   0.0| |R|
        // |I| = |0.596     -0.275  -0.321  0.0| |G|
        // |Q|   |0.212     -0.523  0.311   0.0| |B|
        // |A|   |0.0       0.0     0.0     1.0| |A|
        float4x4 YIQMatrix = float4x4(
            0.299,  0.596,  0.212, 0.000,
            0.587, -0.275, -0.523, 0.000,
            0.114, -0.321,  0.311, 0.000,
            0.000,  0.000,  0.000, 1.000
        );
        
        // Similar to the above matrix, the matrix is in column order.  Thus, 
        // the transform will look like the following:
        // |R|   |1.0   0.956   0.621   0.0| |Y|
        // |G| = |1.0   -0.272  -0.647  0.0| |I|
        // |B|   |1.0   -1.11   1.70    0.0| |Q|
        // |A|   |0.0   0.0     0.0     1.0| |A|
        float4x4 inverseYIQ = float4x4(
            1.0,    1.0,    1.0,    0.0,
            0.956, -0.272, -1.10,  0.0,
            0.621, -0.647,  1.70,   0.0,
            0.0,    0.0,    0.0,    1.0
        );

        // get the pixel value at our current location
        rgbaColor = sampleNearest(src, outCoord());

        yiqaColor = YIQMatrix * rgbaColor;

        // Here we set the I value of the YIQ color to the intensity
        // specified in the UI.  
        yiqaColor.y = intensity; 
        // set the Q value to the destination color to choose the tint
        yiqaColor.z = destColor;

        // convert back to RGBA and set the output value to the modified color.
        dst = inverseYIQ * yiqaColor;
    }
}
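
To use the kernel from Flex / AS3, compile it to a .pbj in the Pixel Bender Toolkit and wrap it in a ShaderFilter. A minimal sketch (the .pbj file name and the applyTint function are my own):

import flash.display.DisplayObject;
import flash.display.Shader;
import flash.filters.ShaderFilter;
import flash.utils.ByteArray;

// Embed the compiled Pixel Bender bytecode.
[Embed(source="colorSepia.pbj", mimeType="application/octet-stream")]
private var ColorSepiaKernel:Class;

private function applyTint(target:DisplayObject, intensity:Number, destColor:Number):void
{
    var shader:Shader = new Shader(new ColorSepiaKernel() as ByteArray);
    // Shader parameters are always set as arrays.
    shader.data.intensity.value = [intensity];
    shader.data.destColor.value = [destColor];
    target.filters = [new ShaderFilter(shader)];
}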

Using the filter can give you images like this:
Original

DestColor = -0.16

DestColor = -0.24

DestColor = -0.3


Not bad, eh? (That's me, publicly patting myself on the back :) from 4 years ago )

Tuesday, April 17, 2012

Refactoring Responsibly - 360Flex Preso

Here is the slide deck (pptx) from my 360Flex refactoring preso.... Refactoring Responsibly

If you want to see the embedded version without the cool animations:



There were also some questions, which I'll do my best to remember and try to answer again.

Q: Did you come up with the "First Draft" idea for coding / programming?
A: As far as I can tell, I couldn't find any other references to it on my searches. It was an idea that developed as a result of trying to sell upper management on the idea that we needed to refactor.

Q: What about refactoring to improve performance?
A: Performance is a whole different animal. If you are looking to get major performance gains, you might need to do more re-writing than cleaning up. You will probably want functional tests to be your characterization tests. And yes, the characterization tests add more function calls, which will probably cause *some mild* performance degradation. But then the choice is readable maintainable code, or performant code. Usually these things are orthogonal. As a side note, once you have tests in place, you have an open field with high confidence of what you can do. If you redesign, get better performance, and the tests still pass, then awesome. (And if you do redesign and the tests still pass, welcome to test driven design :) it is a great thing)

Q: What if I come back and do my second draft the next day, do I still need tests?
A: You are testing your code in some way, whether it is manually executing the actions or running automated tests. If you write your 2nd draft the next day, that is a different level of refactoring than I'm talking about. I'd call that a good idea on the code that you were creating. You aren't in production yet; you are adding a feature or fixing a bug... you are still developing, and while working on the problem, you thought of a better way to do it. However you verified that your code worked yesterday is still clear in your mind, and you can easily verify that it works today. I'm a TDD and automated-test advocate, so I'd say yes, you need tests; but if that is not part of your philosophy, then it is up to you to decide on the maintainability of your code.

Q: Doesn't refactoring admit that you screwed up the first time?
A: NO! As the Retrospective Prime Directive states: "Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand."

Q: Could you slow down next time?
A: I certainly will ;)


Friday, March 23, 2012

You've got an ugly baby: How to "sell" refactoring

Look, nobody likes to be told that they have an ugly baby.

If you are new to an existing corporate project, you are fairly likely to find a code base that could use some improvement.

If you happen to be fortunate enough to have the project manager tell you that they have amassed significant technical debt and need to work on reducing it, then you don't need to read the rest of this post. Attend my 360Flex presentation or check back here for my slide deck in a couple of weeks to learn how to refactor responsibly.

But for the rest of us, telling the client that the code base is not usable and needs to be massively cleaned up is like telling a parent that they have an ugly baby. They won't hear you.

I've tried using the "reduces maintenance costs / speeds up bug fixing" arguments too. And frankly, when faced with a looming deadline, some clients have outright said, "I don't care about maintenance."

Now, I disagree. If you are working on a marketing campaign that has an expected 3-4 week life... fine. Get 'er done and life is good. But when you are working on a project that has an expected life of years, or in this case a decade or more... then it is insanely short-sighted to not care about maintenance.

Clearly, timing is important... Don't propose an immediate refactoring initiative a month before a major deadline. Do, however, propose it for immediately after the deadline. From my experience, if you are not "agile", then your deadline is a major milestone... maybe even a release. This could mean that there will be a period of QA testing, acceptance testing, approvals, etc. For us, this was a time when the developers were supposed to work on documentation, as the code was "frozen". (I disagree with both of those practices.)

This is the perfect time to refactor. If you need to "document" your code, then you need to refactor; well-written code is self-documenting. I'd also argue that unit tests are the best form of documentation that you can have.

But I digress; this post isn't intended to be about refactoring, it is intended to be about how to talk about refactoring so that the business will listen.

If you find that concepts like refactoring, technical debt, or maintainability are not working for you in terms of business buy-in, we've had some success with some others.


  • "Future BIG IDEA" enablement



For us, it was the "new backend framework enablement initiative". This "sold" incredibly well. We were granted a 5 week sprint for enablement (refactoring)

It was during this "enablement" sprint, that this next idea really proved itself.

  • Training / ownership


In my case, I'm a consultant. I have two main responsibilities: help the team with some new code, and train the (soon-to-be maintenance) team. During the refactoring iteration, the team learned more about their code than I could ever have delivered myself. No matter how many presentations, diagrams, code reviews, etc. I gave, nothing would compare to getting the team to improve their own (team's) code.


As a developer, refactoring is wonderful!

  1. The requirements are perfectly clear (do exactly what it is already doing).
  2. The end state is self-defined (I can stop refactoring at any time that I've made any level of improvement, assuming that nothing has changed).
  3. Identifying what to change is clearly describable to any level of developer (code smells).

Actually, the results of this sprint were amazing! And I'm obviously not talking about functionality, as true to our intent -- nothing changed. But the transformation in the team was astonishing.
  • Junior developers had dozens of light-bulb / a-ha moments regarding their own and future coding standards.
  • Everyone felt responsibility for and ownership of the code base, which previously was more of a lay-blame / obligation culture. (See the responsibility model.)
  • We all had fun. It was unanimously voted the "best time" in all of the project's duration.
  • The team of developers had transformed into the "team" (performing, in Tuckman's stages of group development).

So if you need it... it is no longer "we need to rewrite this code".

Let "we need to increase team ownership by enabling [the next big thing] before we start the next phase" be your next refactoring battle cry.

Wednesday, March 14, 2012

Refactoring and Respecting the Team

So I'm working on my 360Flex presentation, and I realized that there is a very important lesson I've learned about refactoring and team respect that is unlikely to make it into the actual presentation.

So if you are not going to read the whole backstory coming up... here is the quote from the retrospective prime directive that illustrates my point perfectly:
“Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.”

And now for the back story.

A couple of years ago, I was brought onto a team as a senior Flex developer. One of my first tasks was to mentor the team in best practices. The novice Flex team had been writing Flex code on their application for about 6 months and, as might be expected, there were lots of areas for improvement.

But there was one section of code that I came across, which absolutely blew me away. I couldn't believe how bad this code was.

There was a view component that created an alert. Within the alert callback, in that same view component, was a switch statement. This switch statement evaluated the width(!) of the alert to determine the business logic of what should happen next.

It looked something like this:
switch(alert.width)
{
    case 341:
        //do approval workflow
        break;
    case 354:
        //do normal workflow
        break;
    case 362:
        //do cancel workflow
        break;
}
As I said... I was completely flabbergasted. I couldn't imagine how anyone, novice or not, with any coding experience, in any language, could think that this was a good idea. Didn't they realize that widths are arbitrary and volatile?! And this was critical business logic; never mind that it was contained in the view, this was business logic based on width!
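
For contrast, here is a minimal sketch of the conventional pattern: branch on which button closed the alert (CloseEvent.detail), not on pixel widths.

import mx.controls.Alert;
import mx.events.CloseEvent;

private function alertClosed(event:CloseEvent):void
{
    switch(event.detail)
    {
        case Alert.YES:
            //do approval workflow
            break;
        case Alert.NO:
            //do normal workflow
            break;
        case Alert.CANCEL:
            //do cancel workflow
            break;
    }
}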

Clearly this still agitates me.

So I created my best-practices presentation, with this code as my shining example of bad code and the need to refactor. I was feeling particularly proud of myself and how much value I would be adding to the team, as if to say -- if this is the code that they were writing, I could bring them up many many levels.

So I showcased this example, hand-waving madly about why this was so bad. Nearly immediately, the technical architect / my mentor / team lead stood up and proclaimed that this was his code. My first thought was, "clearly this isn't his code, this is the code of a total..... oh *$#^&, someone in *this* room wrote this code that I'm totally tearing apart."

So after lots of backpedaling, apologizing, focusing on better ways of organizing the code in general, and blaming the tech architect for such bad code <wink>, the presentation was favorably received.

But, boy, was that a slap in the face for me... and certainly a lesson that I'm not going to forget nor repeat.

It wasn't until just a couple of weeks ago that I found the above retrospective prime directive, but it perfectly sums up the philosophy that was thrust upon me in that moment.

That moment that completely altered my course.
That moment of supporting the *team* first.
That moment of respect.



Tuesday, February 28, 2012

StageText and SkinnablePopupContainer

After much head scratching, I discovered that in the mobile AIR SDK, TextInput wraps StageText. This is cool, because we can use native keyboard handling. But you have to watch out: if you create a popup, you could end up with your stage text on top of your popup. The solution: ensure that all popups extend SkinnablePopUpContainer, and life is easy. Of course this is what Adobe recommends, but to my knowledge they didn't say why. :)

If you are creating custom popups and/or using external libraries, make sure you can verify the object hierarchy and that SkinnablePopUpContainer is there.
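
Here is a minimal sketch of the safe pattern (the popup contents are arbitrary):

import spark.components.SkinnablePopUpContainer;
import spark.components.TextInput;

// Building the popup on SkinnablePopUpContainer keeps the native
// StageText input at the correct depth.
var popup:SkinnablePopUpContainer = new SkinnablePopUpContainer();
popup.addElement(new TextInput());
popup.open(this, true); // owner, modal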

I write this because a mobile UI library that I was playing with didn't, and I had to spend time debugging. Since the search results did not provide an obvious solution, hopefully this post helps someone else who comes across the issue where the TextInput is always on top regardless of the z-order depth.

Tuesday, February 14, 2012

TDD is like losing-weight-hard & fails with converging prototypes

First off, I'm trying to develop better TDD habits, and changing habits is hard.

It is very much like losing weight. It is easy to get to the gym / eat less for a week or two, and you are feeling great. You are a losing-weight-TDD rockstar. Then you hit that stressful moment (or moments) and all of a sudden you are back to old habits. You think that as soon as the stress passes, you'll get back to the gym / do some TAD (Test After Development), except that it is way harder to get started again.

I've also come to realize that there are times when TDD is very appropriate and times when it is not.
TDD is a lifesaver when you have clearly defined goals / user stories. You write your test to meet the acceptance criteria, write your class, and you are golden.

But when your goals are nebulous (say, the client's core reason for the project keeps getting redefined), then TDD gets in the way. If I were to name this, I'd call it "converging prototype" development. This is where they don't really know what they want, nor even the specific business problem they are trying to solve. They have a general pain point, or a new idea, but it isn't fleshed out (that is your job, the developer's). This is where you have a just-in-time backlog and you are working in 1-week iterations. At the demo at the end of each week, the current application / prototype is really a "conversation starter" rather than a releasable item. Here is where TDD is not useful, as there is no goal. What are you testing?

Sure you could write tests for what you had, but at the moment it is still a prototype, or proof of concept. Core ideas are still highly vulnerable to massive changes. And designing / developing / AND testing in this phase sucks.

At some point, 3-4 iterations later, it is likely (hopefully) starting to converge on ideas / workflows / datatypes (yes, even the datatypes couldn't be well defined initially). So now that we are converging, now is a good time to do some TAD.

Granted, it is not ideal, but testing after at least gets you tests, which is way better than no tests.
This is also a good time to clean up / redesign where you are converging. Notice that I didn't say refactor; we are not necessarily trying to retain all of the functionality. We are still talking about a prototype.

We know that in another iteration or two, this prototype is going to cross over to be the actual app, but for now, we can be a little bit flexible with the "requirements".

In my commitment to more TDD, I've found that it is a project-specific goal. On my well-defined project, I had 100% TDD and I LOVED it. On my poorly defined project, I started with about 80% TDD and rapidly dropped to 0%. The project has had 3 major direction changes, and the tests and codebase were nearly 100% irrelevant after each change.

I guess my next discovery is how well I write testable code without tests :)




Thursday, January 19, 2012

Spark Data Grid - Precision Focus Control - preso

So I'm re-presenting the Spark data grid presentation that I did at MAX / 360|Flex to the RIA5280 Denver users group. I've updated the deck with some additional information that I've learned over the last couple of months.

So here is the most recent slide deck.

and the same blog references to the source code.

Thursday, January 12, 2012

Memopal automation

So I use Memopal as my cloud-based backup service. I like them because multiple computers can back up to a single account automatically; that's really its best feature. It lacks a public API, and the web interface is a little clunky (like when you start running out of space and are trying to figure out what to delete).

So I found that I had ~400 iTunes backup files being stored at 30MB each, and the three clicks plus several seconds it took to delete each file wasn't going to work for me. And there was no public API that I could code against, especially with session-based authentication.

A couple of Firefox plugins and a little hand editing in a text editor later, and 400 files, worth 12GB, were gone in a few minutes.

Here are the details.


  • Install iMacros for Firefox, for the automation
  • Install Link Gopher for Firefox
  • Navigate to the folder with the files that you want to delete
  • Use Link Gopher to get a list of all of the links on the page
  • Copy the resultant html page and paste it into a text editor
  • Remove the links for any files that you want to keep
  • Find "https" -> replace with "URL GOTO=https"
  • Find "get?" -> replace with "delete?"
  • Select all and copy
  • Create a dummy macro via record
  • Edit the macro and paste
  • Play back the macro
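
The end result is a macro whose body is just one GOTO line per file, something like this (the URLs here are placeholders; the real ones come straight from Link Gopher's output):

' iMacros: each line visits one "delete" link
URL GOTO=https://example-memopal-host/files/delete?id=1001
URL GOTO=https://example-memopal-host/files/delete?id=1002
URL GOTO=https://example-memopal-host/files/delete?id=1003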
It is the little things in life that are worth celebrating (and blogging).