Rockets before Rovers: The Agile Moon Landing Project

January 25, 2014 3 comments

When a pack of software engineers met in Utah in early 2001 to discuss a new way of building software, the summit did not produce a brand new SDLC, but rather a simple set of guidelines for software teams to follow in order to achieve better results with less friction. This ‘Agile Manifesto’ was as much a guiding force for future software development as it was an indictment of the processes that continue to plague the industry. As simple as the guidelines are, they can be profound when applied properly:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools

Working software over comprehensive documentation

Customer collaboration over contract negotiation

Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

I find it informative to invert the guidelines and ask ourselves what it looks like to build software badly. If you want to build it the wrong way, you’ll define the process and choose tools first, write a whole bunch of documentation and requirements before you write any code, negotiate a detailed client contract but ignore those clients while you’re writing the code, and stick firmly to your plan regardless of what happens along the way. As horrible as that sounds, I’m afraid it also sounds pretty familiar to anyone who’s been in the industry for a while.

As simple and straightforward as those guidelines are… and I do believe they should be at the heart of any software development exercise… they do not prescribe any particular SDLC to achieve those results. In fact, that would be quite contrary to the first guideline!

In practice, however, all ‘agile’ projects are run as some form of iterative, or incremental, process. And that process is not a new one. Many people believe that ‘Agile’ is some new, cutting-edge software process and that moving to this new process is somehow revolutionary and, perhaps, risky. As a result, I think there’s a lot of resistance to those changes.

In truth these iterative approaches have been around for quite a long time, going back at least as far as W. Edwards Deming pushing his Plan-Do-Check-Act cycles for improving manufacturing in the 1940s. NASA’s Project Mercury, the opening project in the ‘space race’, as well as Project Gemini and the moon landing all followed incremental development cycles. Project Mercury did half-day iterations with test-first development! Not such a new idea after all, just an industry that’s been very slow to catch on.

So, what does it really mean to do iterative development? Well, very simply, it means that NASA didn’t get hung up on designing an anti-gravity pen to allow astronauts to write in space before they even had a rocket to get them there! They knew, first things first, they had better figure out how to make a reliable rocket capable of putting something in orbit… and that’s where they started.

Rockets before Rovers

And that’s the absolute heart of agile development. You don’t need a big fancy process with multiple steps, regular meetings, status updates, burndown charts, and so on. If you do a little bit of planning, build your features incrementally, talk to each other and your clients, and make changes whenever necessary… then congratulations, you’re doing agile software development! I see far too many groups getting hung up on all the little details of the Scrum process, or Extreme Programming, or Kanban… all of those things are great, but you don’t need to start with all that.

Go ahead and read about Scrum and Kanban… it will be helpful eventually, but you really need to start simple. The big secret is that you should design your agile process using agile methodology! Start with the core (iterations and demos), and end each iteration with a retrospective meeting to see what’s working and what’s not and fix it. Be patient, and your process will improve week over week. Plan your projects carefully from the core outwards and always remember: Rockets before Rovers!


Related Links:

The Power and Simplicity of the Data Warehouse

Originally posted on Fortigent:

“In many ways, dimensional modeling amounts to holding the fort against assaults on simplicity”
– Ralph Kimball, The Data Warehouse Toolkit


Although there are many reasons that an organization may consider building a “data warehouse”, most of the time the overwhelming reason is performance-related… a report or web page is just taking too long to run and they want it to be faster. If that data could be pre-computed then that page would be really fast!

I’m here to tell you that speed is not the most important reason for building the warehouse.

When you’ve got a system where multiple consumers are reading the data from the transactional store and doing their own calculations, you create a whole bunch of other problems beyond just the speed issue:

  • You create the potential for multiple different and conflicting results in that system. At least a few of those consumers will inevitably…


The coolest SQL Server function you never heard of

Ever heard of the SQL_VARIANT_PROPERTY function? I didn’t think so.

SQL Server developers very often make the mistake of making their NUMERIC fields too large. When faced with a choice of how to size the column, they’ll often think “make it way larger than I need, just to be safe”.

This works OK as long as you simply store and read these values, but if you ever have to perform math with these columns, particularly some form of division or multiplication, you may find your results mysteriously losing precision.

This is because SQL Server can only store a maximum of 38 digits per number… if the result of your mathematical expression may yield a number larger than that, SQL Server is forced to downsize it, and it removes digits from the scale (the right side of the decimal point) as a result.

For example, let’s say you are dividing two NUMERIC(30, 10) numbers as follows:


declare @x NUMERIC(30, 10) = 10.0
declare @y NUMERIC(30, 10) = 3.0
declare @result NUMERIC(38,10)
set @result = @x / @y
print @result

Your result, even though you declared 10 digits of scale, actually comes out as 3.3333333300… only 8 meaningful decimal digits.

What happened?

Well, when you divide two NUMERICs with a precision of 30, you can end up with a result too large to store… SQL Server is forced to shrink the right side of your result to accommodate a maximum size of 38.

For division this is a relatively nasty determination. The nitty-gritty algorithm of how this works can be found here: http://msdn.microsoft.com/en-us/library/ms190476.aspx.
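Working through the arithmetic with the formulas on that page: for e1 / e2 the result precision is p1 - s1 + s2 + max(6, s1 + p2 + 1) and the result scale is max(6, s1 + p2 + 1). Two NUMERIC(30, 10) operands therefore derive a precision of 30 - 10 + 10 + 41 = 71 and a scale of 41. Since 71 blows past the 38-digit ceiling, SQL Server caps the precision at 38 and trims the scale down to 38 - (71 - 41) = 8… which is exactly the 8 digits we ended up with above.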

A very useful function for figuring out what your result will yield is the SQL_VARIANT_PROPERTY function. You use it as follows:


declare @x NUMERIC(30, 10) = 10.0
declare @y NUMERIC(30, 10) = 3.0
declare @result NUMERIC(38,10)
set @result = @x / @y
print @result
select SQL_VARIANT_PROPERTY(@x / @y, 'BaseType') AS BaseType,
 SQL_VARIANT_PROPERTY(@x / @y, 'Precision') AS Precision,
 SQL_VARIANT_PROPERTY(@x / @y, 'Scale') AS Scale

And the results look like this:

BaseType = numeric, Precision = 38, Scale = 8

The SQL_VARIANT_PROPERTY function can tell you details about the type derived from your expression. Here we can see that it reduced the scale to 8, which is why I’ve lost two digits of precision.

In this case, you’ll need to reduce your base types, or cast them to something smaller before doing the math, to leave enough room for the scale you want.
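For example, here’s a minimal sketch of the cast-first approach… NUMERIC(20, 10) below is just an illustrative size, so pick whatever precision actually fits your data:


declare @x NUMERIC(30, 10) = 10.0
declare @y NUMERIC(30, 10) = 3.0
declare @result NUMERIC(38, 10)
-- cast the operands down before dividing so the derived type keeps more scale
set @result = cast(@x as NUMERIC(20, 10)) / cast(@y as NUMERIC(20, 10))
print @result -- 3.3333333333 : all 10 digits survive this time
-- the derived type is now NUMERIC(38, 18) instead of NUMERIC(38, 8)
select SQL_VARIANT_PROPERTY(cast(@x as NUMERIC(20, 10)) / cast(@y as NUMERIC(20, 10)), 'Precision') AS Precision,
 SQL_VARIANT_PROPERTY(cast(@x as NUMERIC(20, 10)) / cast(@y as NUMERIC(20, 10)), 'Scale') AS Scale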

Documentation on SQL_VARIANT_PROPERTY here:

http://msdn.microsoft.com/en-us/library/ms178550.aspx

Details on Precision and Scale here:

http://msdn.microsoft.com/en-us/library/ms190476.aspx

Win a D800 or 5D Mark III

Cool contest to win a Nikon D800 or Canon 5D Mark III.

Sponsored by snapknot.com which is a website that allows people to find wedding photographers.

Big thanks to the SnapKnot wedding photography directory for offering this great camera giveaway!

 

Reading from Snapshot Databases With Multiple Tables

March 21, 2013 1 comment

If you haven’t explored using Snapshot isolation in SQL Server, I recommend you give it a look. A snapshot-enabled database allows the reader to get a clean read without blocking.

Prior to this capability, the only way to guarantee a non-blocking read from the database was to sprinkle NOLOCK hints all over your queries. Clearly this is a bad idea because you’re getting a dirty read, but it’s really much worse than that.
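For reference, that old pattern looked something like this (the table names here are just placeholders):


-- the old workaround: dirty, lock-ignoring reads on every table touched
select *
from SomeTable with (nolock)
join SomeOtherTable with (nolock) on SomeOtherTable.ParentId = SomeTable.Id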

Enter Snapshot isolation… When querying using Snapshot isolation, your query reads the state of the rows as of the moment the query begins, prior to any outstanding transactions against those rows. This lets you get the last known committed state of those rows (clean data) without having to wait for outstanding transactions to commit. This is critical behavior if you want, say, a website that doesn’t have slow-loading pages.
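For completeness, ‘snapshot enabled’ is a one-time database setting. Here’s a minimal sketch, where MyDatabase is a placeholder name (note that turning on READ_COMMITTED_SNAPSHOT also changes how plain READ COMMITTED reads behave):


-- allow transactions to request SNAPSHOT isolation ('MyDatabase' is a placeholder)
alter database MyDatabase set ALLOW_SNAPSHOT_ISOLATION on
-- optionally, make ordinary READ COMMITTED reads use row versioning (non-blocking) as well
alter database MyDatabase set READ_COMMITTED_SNAPSHOT on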

Now, it gets interesting when you’re trying to read multiple tables in one batch from your Snapshot database. Let me show you an example.

Start with two sample tables:

create table t1 ( id int identity(1, 1), value varchar(50))
create table t2 ( id int identity(1, 1), value varchar(50))
insert into t1 values ('Test1')
insert into t2 values ('Test1')

Now, set up a couple of update statements, but don’t execute them yet:

-- Update statements
begin transaction
update t1 set value = 'Test2'
update t2 set value = 'Test2'
commit transaction

In another window, set up a read as follows:

-- Select statements
set transaction isolation level read committed
begin transaction
select * from t1
-- create some artificial time between the two queries
waitfor delay '000:00:10'
select * from t2
commit transaction

Now, execute the Select Statements code, then go back to your update statements code and execute the updates including the commit (you’ve got 10 seconds, so move quickly). Now go back to your select statements and wait for them to finish.

Here’s what you’ll get:

Using READ COMMITTED Isolation

Since your first query executed right away, it gets Test1, while the second query reads after 10 seconds (during which your update occurred) and sees Test2.

Now switch your test data back to its original state:

update t1 set value = 'Test1'
update t2 set value = 'Test1'

And change your select query to use Snapshot isolation:

set transaction isolation level snapshot
begin transaction
select * from t1
-- create some artificial time between the two queries
waitfor delay '000:00:10'
select * from t2
commit transaction

Now repeat the process… run your select query, switch windows and run your updates with commit, then switch back and wait for your select query to complete. Now you’ll get this:

Using SNAPSHOT Isolation (both queries return Test1)

Cool! Now I get Test1 from both tables, even though I updated the data between the two individual queries. How does that work, and why does it matter?

According to MSDN, SNAPSHOT isolation specifies that ‘data read by any statement in a transaction will be the transactionally consistent version of the data that existed at the start of the transaction‘.

This differs from READ COMMITTED in which data ‘can be changed by other transactions between individual statements within the current transaction‘.

This can be pretty important if you are publishing data to a multi-table warehouse and that multi-table publication should be considered as part of a ‘set’. If you use READ COMMITTED in that scenario you can get, essentially, a mismatched set of data. This is usually not a good thing.

If you’re reading a single table from your snapshot-enabled database, then using read committed will be fine. You’ll get your nice non-blocking clean read. If you need to read multiple tables in one transaction, however, and you want those reads to be consistent, you need to explicitly use SNAPSHOT isolation and start a transaction!

Yes, that’s right… you need a transaction wrapping your select statements. Transactions are not just for updates… shock and horror!

————–

More information here:

http://msdn.microsoft.com/en-us/library/ms173763.aspx

http://msdn.microsoft.com/en-us/library/tcbchxcb(v=vs.80).aspx

http://blogs.msdn.com/b/sqlcat/archive/2007/02/01/previously-committed-rows-might-be-missed-if-nolock-hint-is-used.aspx

Originally posted on Fortigent:

Yesterday we hosted RockNUG’s Robocode programming contest in our offices. It was a blast, literally, as virtual tanks destroyed each other on screen, controlled by AI programs the competitors had built.

Quotes of the morning:

“This is harder than I thought it would be.”

“Why can’t I make my gun fire?”

Most of the morning was spent in development…

The morning was spent on intense debugging and optimization

Team Titans and Team Bobble Bot prepare for battle

Focus, focus, focus…

One of the unit tests is failing and nobody knows why

Dan Kempner’s Rambot was all brute force and no style

@slawrenc1’s Ed 209 was brave and bold, but bigger, stronger robots kept kicking sand in its face

Dean Fiala’s Octo won an early scrimmage but didn’t reach the 1:1 finals

Intense concentration on the Johnny 5 vs Panicbot 1:1 battle

Bobble Bot fights for its life

@brettwgreen’s Shockwave won the 1:1 contest


Motion Chart using D3.js

January 16, 2013 2 comments

Ever since I saw Hans Rosling’s 2006 GapMinder TED Talk on African wealth and his use of Motion Charts in it, I’ve been looking for ways to implement them in my own websites.

Up until now, there really hasn’t been anything ‘framework-y’ enough to use with your own data. Google bought the rights from GapMinder a year later, and you can do a limited motion chart in Google Docs, but it would be difficult to integrate this into your own site.

Enter the new D3.js library…

This new JavaScript visualization library has an example that actually implements Rosling’s chart! Truly awesome!

I talked a bit about GapMinder and Motion Charts after seeing them in Jessica Moss‘ presentation on Power View at the latest RockNUG meetup. That new SQL Server data analytics tool includes some motion chart capability, which is cool but still limited to use within that application.

Now we can do motion charts (as well as hundreds of other visualizations) using this incredible new library in our web apps…

Here’s the Motion Chart example:

http://bost.ocks.org/mike/nations/

Here’s the D3.js sample page:

https://github.com/mbostock/d3/wiki/Gallery

And, if you haven’t seen Hans Rosling’s presentation… well, you simply must:

http://www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen.html
