Management by PowerPoint

February 26, 2015

Serious problems require a serious tool… Meetings should center on concisely written reports on paper, not fragmented bulleted talking points projected up on the wall.

– Edward Tufte

If every software meeting that you attend is being driven by a PowerPoint deck, don’t kid yourself: you’ve got yourself a serious problem. This is a sign of an acute infection of bureaucracy.

Let’s face it, most software decisions… whether it be design, prioritization, process, or budget… are conversations that need to happen at a detailed level. PowerPoint presentations, almost by design, force your meeting participants to gloss right over those details. They imply tacit agreement with high-level, mostly generic points and statements that often have little to no bearing on the problems being discussed.

Let me illustrate by identifying some glaring symptoms of the PowerPoint infection:

  • A detailed discussion breaks out about a specific topic between multiple participants. Perhaps this becomes an open debate and even gets a little contentious. The presenter interrupts and says “Those are great questions, let’s table them so we can move on to the next slide.” I’m very reluctant to stop any debate when it breaks out. There are times and reasons for doing so. But if you’re stopping this debate only so you can “get through the deck” then you are infected… seek help immediately.
  • After talking through a slide, the presenter asks “Does everyone agree with this slide?” as if the slide were an end unto itself! The deck itself accomplishes nothing and, at best, only serves to facilitate discussion and debate. If you think “agreeing to the slide” represents some meaningful consensus, then you may be patient zero in the epidemic.
  • As you and your boss are preparing the deck for the big meeting, you spend 60% of your time messing with the fonts, colors, shapes and layout of the deck… rather than actually laying out the details of the problems and solutions you are going to discuss. Seriously, if you find yourself doing this then just delete the deck outright, pick up a pencil… and stab yourself in the eye with it.

Look, your job is to inform and educate your coworkers and to make an argument for potential solutions to real problems. You should not have to ‘dumb down’ these arguments for your peers and higher-ups to get their agreement. It’s insulting to them… and it shows zero investment on your part in framing the problems and in presenting and defending your solution.

Want to know what you should do instead? Write a paper… a detailed paper… on all the ins-and-outs of the problem and your potential solutions. Then send that out and have everyone read it before the meeting… in which you will discuss everything that you’ve outlined. Your bosses should possess both the acumen and the time to have intelligent discourse about the problems that your company is facing… if they have neither, then maybe your PowerPoint infection has become terminal.

Related info:

PowerPoint Does Rocket Science–and Better Techniques for Technical Reports

Jeff Bezos And The End of PowerPoint As We Know It

Rockets before Rovers: The Agile Moon Landing Project

January 25, 2014

When a pack of Software Engineers met in Utah in early 2001 to discuss a new way of building software, the summit did not produce a brand new SDLC, but rather a simple set of guidelines for software teams to follow in order to achieve better results with less friction. This ‘Agile Manifesto’ was as much a guiding force for future software development as it was an indictment of the processes that continue to plague the industry. As simple as the guidelines are, they can be profound when applied properly:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over Processes and tools

Working software over Comprehensive documentation

Customer collaboration over Contract negotiation

Responding to change over Following a plan

That is, while there is value in the items on the right, we value the items on the left more.

I find it informative to invert the guidelines and ask ourselves what it looks like to build software badly. If you want to build it the wrong way, you’ll define the process and choose tools first, write a whole bunch of documentation and requirements before you write any code, negotiate a detailed client contract but ignore those clients while you’re writing the code, and stick firmly to your plan regardless of what happens along the way. As horrible as that sounds, I’m afraid it also sounds pretty familiar to anyone who’s been in the industry for a while.

As simple and straightforward as those guidelines are… and I do believe they should be at the heart of any software development exercise… they do not prescribe any particular SDLC to achieve those results. In fact, that would be quite contrary to the first guideline!

In practice, however, all ‘agile’ projects are run as some form of iterative, or incremental, process. And that process is not a new one. Many people believe that ‘Agile’ is some new, cutting-edge software process and that moving to this new process is somehow revolutionary and, perhaps, risky. As a result, I think there’s a lot of resistance to those changes.

In truth, these iterative approaches have been around for quite a long time, going back at least as far as W. Edwards Deming in the 1940s pushing his Plan-Do-Check-Act cycles for improving the automotive industry. NASA’s Project Mercury, the opening project in the ‘space race’, as well as Project Gemini and the moon landing all followed incremental development cycles. Project Mercury did half-day iterations with test-first development! Not such a new idea after all, just an industry that’s been very slow to catch on.

So, what does it really mean to do iterative development? Well, very simply, it means that NASA didn’t get hung up on designing an anti-gravity pen to allow astronauts to write in space before they even had a rocket to get them there! They knew, first things first, they’d better figure out how to make a reliable rocket capable of putting something into orbit… and that’s where they started.

Rockets before Rovers

And that’s the absolute heart of agile development. You don’t need a big fancy process with multiple steps, regular meetings, status updates, burndown charts, etc. If you do a little bit of planning, build your features incrementally, talk to each other and your clients, and make changes whenever necessary… then congratulations, you’re doing agile software development! I see far too many groups getting hung up on all the little details of the Scrum process, or Extreme Programming, or Kanban… all of those things are great, but you don’t need to start with all that.

Go ahead and read about Scrum and Kanban… it will be helpful eventually, but you really need to start simple. The big secret is that you should design your agile process using agile methodology! Start with the core (iterations and demos), and end each iteration with a retrospective meeting to see what’s working and what’s not and fix it. Be patient, and your process will improve week over week. Plan your projects carefully from the core outwards and always remember: Rockets before Rovers!

The Power and Simplicity of the Data Warehouse

Originally posted on Fortigent:

“In many ways, dimensional modeling amounts to holding the fort against assaults on simplicity”
– Ralph Kimball, The Data Warehouse Toolkit

Although there are many reasons that an organization may consider building a “data warehouse”, most of the time the overwhelming reason is performance-related… a report or web page is just taking too long to run and they want it to be faster. If that data could be pre-computed, then that page would be really fast!

I’m here to tell you that speed is not the most important reason for building the warehouse.

When you’ve got a system where multiple consumers are reading the data from the transactional store and doing their own calculations, you create a whole bunch of other problems beyond just the speed issue:

  • You create the potential for multiple different and conflicting results in that system. At least a few of those consumers will inevitably…


The coolest SQL Server function you never heard of

Ever heard of the SQL_VARIANT_PROPERTY function? I didn’t think so.

SQL Server developers very often make the mistake of making their NUMERIC fields too large. When faced with a choice of how to size the column, they’ll often think “make it way larger than I need, to be safe”.

This works OK as long as you simply store and read these values, but if you ever have to perform math with these columns, particularly some form of division or multiplication, you may find your results mysteriously losing precision.

This is because SQL Server can only store a maximum of 38 digits per number… if the result of your mathematical expression may yield a number larger than that, SQL Server will be forced to downsize it and strip digits from the scale (the fractional part) as a result.

For example, let’s say you are dividing two NUMERIC(30, 10) numbers as follows:

declare @x NUMERIC(30, 10) = 10.0
declare @y NUMERIC(30, 10) = 3.0
declare @result NUMERIC(38,10)
set @result = @x / @y
print @result

Your result, even though you were hoping for 10 decimal places, is actually 3.3333333300… only 8 meaningful digits after the decimal point.

What happened?

Well, when you divide two NUMERICs with a precision of 30, you can end up with a result type too large to store… SQL Server is forced to shrink the right side (the scale) of your result to accommodate a maximum precision of 38.

For division this is a relatively nasty determination. The nitty-gritty algorithm of how this works can be found here:
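In short, for a division of e1 by e2 the documented rule gives the result a scale of max(6, s1 + p2 + 1) and a precision of p1 - s1 + s2 + scale… treat the arithmetic below as a sketch of how that plays out for our two NUMERIC(30, 10) operands rather than gospel:

scale     = max(6, 10 + 30 + 1)  = 41
precision = 30 - 10 + 10 + 41    = 71

Both of those blow past the 38-digit cap, so SQL Server keeps the 30 integer digits and trims the scale down to 38 - 30 = 8 (it won’t trim below 6)… which is exactly where the 8 digits above came from.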

A very useful function for figuring out what your result will yield is the SQL_VARIANT_PROPERTY function. You use it as follows:

declare @x NUMERIC(30, 10) = 10.0
declare @y NUMERIC(30, 10) = 3.0
declare @result NUMERIC(38,10)
set @result = @x / @y
print @result
select SQL_VARIANT_PROPERTY(@x / @y, 'BaseType') AS [BaseType],
 SQL_VARIANT_PROPERTY(@x / @y, 'Precision') AS [Precision],
 SQL_VARIANT_PROPERTY(@x / @y, 'Scale') AS [Scale]

And the results look like this:
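In text form, the grid comes back along these lines:

BaseType    Precision    Scale
numeric     38           8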


The SQL_VARIANT_PROPERTY function can tell you details about the type derived from your expression. Here we can see that it reduced the scale to 8, which is why I’ve lost two of my expected decimal places.

You’ll need to reduce your base types or cast them before doing the math to have enough room to get the desired precision in this case.

Documentation on SQL_VARIANT_PROPERTY here:

Details on Precision and Scale here:

Win a D800 or 5D Mark III

Cool contest to win a Nikon D800 or Canon 5D Mark III.

Sponsored by SnapKnot, which is a website that allows people to find wedding photographers.

Big thanks to the SnapKnot wedding photography directory for offering this great camera giveaway!


Reading from Snapshot Databases With Multiple Tables

March 21, 2013

If you haven’t explored using Snapshot isolation in SQL Server, I recommend you give it a look. A snapshot-enabled database allows the reader to get a clean read without blocking.

Prior to this capability, the only way to guarantee a non-blocking read from the database was to sprinkle NOLOCK hints all over your queries. Clearly this is a bad idea because you’re getting a dirty read, but it’s really much worse than that.
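If you’ve never seen one, a NOLOCK hint looks something like this (the table name is just a placeholder):

select * from Orders with (nolock) -- never blocked, never blocking, and not necessarily correct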

Enter Snapshot isolation… When querying using Snapshot isolation, your query will read the state of the rows at the moment the query begins, prior to any outstanding transactions against those rows. This allows you to get the last known committed state of those rows (clean data) without having to wait for outstanding transactions to commit. This is critical behavior if you want, say, a website that doesn’t have slow loading pages.
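As an aside, ‘snapshot enabled’ just means the database-level options have been switched on… something along these lines, where MyDatabase is a placeholder name:

-- allow readers to request the SNAPSHOT isolation level
alter database MyDatabase set ALLOW_SNAPSHOT_ISOLATION on
-- optionally, have plain READ COMMITTED reads use row versioning as well
alter database MyDatabase set READ_COMMITTED_SNAPSHOT on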

Now, it gets interesting when you’re trying to read multiple tables in one batch from your Snapshot database. Let me show you an example.

Start with two sample tables:

create table t1 ( id int identity(1, 1), value varchar(50))
create table t2 ( id int identity(1, 1), value varchar(50))
insert into t1 values ('Test1')
insert into t2 values ('Test1')

Now, set up a couple of update statements, but don’t execute them yet:

-- Update statements
begin transaction
update t1 set value = 'Test2'
update t2 set value = 'Test2'
commit transaction

In another window, set up a read as follows:

-- Select statements
set transaction isolation level read committed
begin transaction
select * from t1
-- create some artificial time between the two queries
waitfor delay '000:00:10'
select * from t2
commit transaction

Now, execute the Select Statements code, then go back to your update statements code and execute the updates including the commit (you’ve got 10 seconds, so move quickly). Now go back to your select statements and wait for them to finish.

Here’s what you’ll get:

Using READ COMMITTED Isolation
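In text form, the two result sets come back as:

t1: id = 1, value = Test1
t2: id = 1, value = Test2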

Since your first query executed right away, it gets Test1, while the second query reads after 10 seconds (during which your update occurred) and sees Test2.

Now switch your test data back to its original state:

update t1 set value = 'Test1'
update t2 set value = 'Test1'

And change your select query to use Snapshot isolation:

set transaction isolation level snapshot
begin transaction
select * from t1
-- create some artificial time between the two queries
waitfor delay '000:00:10'
select * from t2
commit transaction

Now repeat the process… run your select query, switch windows and run your updates with commit, then switch back and wait for your select query to complete. Now you’ll get this:
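Again in text form:

t1: id = 1, value = Test1
t2: id = 1, value = Test1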


Cool! Now I get Test1 from both tables, even though I updated the data between the two individual queries. How’s that work and why does it matter?

According to MSDN, SNAPSHOT isolation specifies that ‘data read by any statement in a transaction will be the transactionally consistent version of the data that existed at the start of the transaction‘.

This differs from READ COMMITTED in which data ‘can be changed by other transactions between individual statements within the current transaction‘.

This can be pretty important if you are publishing data to a multi-table warehouse and that multi-table publication should be considered as part of a ‘set’. If you use READ COMMITTED in that scenario you can get, essentially, a mismatched set of data. This is usually not a good thing.

If you’re reading from a single table from your Snapshot-enabled database, then using read committed will be fine. You’ll get your nice non-blocking clean read. If you need to read multiple tables in one transaction, however, and you want those reads to be consistent, you need to explicitly use SNAPSHOT isolation and start a transaction!

Yes, that’s right… you need a transaction wrapping your select statements. Transactions are not just for updates… shock and horror!


More information here:

Originally posted on Fortigent:

Yesterday we hosted RockNUG’s Robocode programming contest in our offices. It was a blast, literally, as virtual tanks destroyed each other on screen, controlled by AI programs the competitors had built.

Quotes of the morning:

“This is harder than I thought it would be.”

“Why can’t I make my gun fire?”

Most of the morning was spent in development…


