Mocking explained - The Developer

The DEVELOPER No 4/2013

[Advertisement: ViaNett - "We love SMS! The fast way to integrate SMS into your software." Sign up for a free developer account at www.vianett.com/developer. For more information: www.vianett.com, sales@vianett.no, +47 69 20 69 20.]
NDC LONDON SPECIAL EDITION

Designing with types in F# - Scott Wlaschin, page 4
Mocking explained - Dror Helper, page 28
Getting Serious with JavaScript - Martin Beeby, page 56

Join NDC London!
NDC Oslo has always been a very special conference for me. It's not about the accommodation (which is lovely) or the city (which is beautiful) or the venue (which is awesome). It's not even about the speakers and sessions, even though they're diverse and fascinating. It's about the people you meet, the ideas sparking off each other in the hallways, and the real community feeling. That doesn't happen by chance - there's some special magic about the way the conference is organized which encourages all of this to happen, celebrating what each and every attendee has to offer. That's why I'm so excited about NDC London: the same magic, right on my doorstep. I can't wait - roll on December!

Jonathan Skeet
Contents

ARTICLES
Designing with types in F# ................................... p. 4
Implementing (the original) Lisp in Python ................... p. 8
Understanding the world with F# .............................. p. 20
Mocking explained ............................................ p. 28
OWASP Top Ten ................................................ p. 34
Developers Baking in Security ................................ p. 36
Hello JavaScript ............................................. p. 38
Powerful questions ........................................... p. 42
London ....................................................... p. 44
Adding Animations as the final gloss for your Windows Phone app ... p. 46
Getting Serious with JavaScript .............................. p. 56
Modular databases with LevelDB and Node.js ................... p. 60
Feedback for software development ............................ p. 64
Queue centric workflow and Windows Azure ..................... p. 68
Project Design: A Call for Action ............................ p. 72

COURSES
Course overview Oslo ......................................... p. 78
Course overview London ....................................... p. 80
NDC London Agenda ............................................ p. 84
Advertise in The Developer and meet your target group! For more information about advertising, please contact Charlotte Lyng at +47 93 41 03 57 or charlotte@programutvikling.no.
Member of Den Norske Fagpresses Forening
Design: Ole H. Størksen
Uncredited photos are from Shutterstock, except portraits.
Print run: 13,000
Print: Merkur Trykk AS
Editor: Kjersti Sandberg
Marketing Manager: Charlotte Lyng
Publisher: Norwegian Developers Conference AS, by Programutvikling AS and DeveloperFocus Ltd
Organisation no.: 996 162 060
Address: Martin Linges vei 17-25, 1367 Snarøya, Norway
Phone: +47 67 10 65 65
E-mail: info@programutvikling.no
Join NDC London
The first NDC London will be on 2-6 December and is expanding on the success of the Norwegian Developers Conference in Oslo. The venue is already booked and both pre-conference and conference will be at the ICC suites at the ExCeL venue.
Designing with types in F#
By Scott Wlaschin

In F#, designing data types is a very important part of the development process. The thoughtful use of types can not only make a design clearer, but also ensure that all code conforms to the domain rules. This article will demonstrate a few examples of this.
UNDERSTANDING F# TYPES
In F#, there are two primary types used for modelling domains: "record" types and "discriminated union" types. A record type stores a number of fields in a familiar way, like this:
type Customer =
    { CustomerId: int; Name: string }
A discriminated union type represents a set of distinct
choices, like this:
type ContactInfo =
    | Email of string
    | PhoneNumber of int
A value of this type must be either an email with an associated string value, or a phone number with an associated int value, but not both.
As we'll see in the following examples, choice types are extremely useful in creating a clear design.
THE USEFULNESS OF CHOICE TYPES
Consider a structure for storing information about a payment method:
type PaymentMethodType = Cash | Card // an enum

type PaymentMethodInfo = {
    PaymentMethod: PaymentMethodType;
    CardNumber: string;
    CardExpiryDate: string }
The problem with this design is that it is not clear when the
CardNumber and CardExpiryDate values are required. If the
payment method is “Cash” neither field is required, but if
the payment method is “Card” then both fields are required.
The design above does not make these requirements clear.
We can refactor the design by making a special type for
the card info, and then change the main type to be a choice
between Cash (with no additional required information)
and Card, with the additional required information being
described in the new mini-structure.
type CardInfo = {
    CardNumber: string;
    ExpiryDate: string }

type PaymentMethodInfo =
    | Cash
    | Card of CardInfo
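To see the benefit on the consuming side, here is a small sketch (not from the article; describePayment is a name made up for this illustration) of code that handles a payment. Pattern matching forces us to deal with both cases, and the card details are only accessible in the Card case:

let describePayment paymentInfo =
    match paymentInfo with
    | Cash ->
        printfn "Paid in cash"
    | Card cardInfo ->
        // The card fields are only in scope here, where they are guaranteed to exist
        printfn "Paid by card %s (expires %s)" cardInfo.CardNumber cardInfo.ExpiryDate

If we forget one of the cases, the compiler warns about an incomplete pattern match, which is exactly the kind of domain rule enforcement this design is after.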
OPTIONAL VALUES
Another important use of a choice type is to explicitly document optional values. In F#, values are not allowed to be null, which implies that all values are always required. But what if a value is genuinely optional? How can we model that?
The answer is to create a special type with two choices. One choice represents "value present" and the other choice represents "value missing". For example, to model an optional int, we might define a choice type like this:
type OptionalInt =
    | SomeInt of int
    | Nothing
This is such a common requirement that this type is built into F# and is called the Option type. With the option type available, we can now clearly indicate which fields are not required.
For example, if the card info needed to be extended to include an optional "valid from" field, we could define it like this:
type CardInfo = {
    CardNumber: string;
    ExpiryDate: string;
    ValidFrom: string option }
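As a quick illustration (this snippet is not from the article), the optional field is then explicit at every construction site, so a reader can immediately tell which data may legitimately be absent:

let oldCard = { CardNumber = "1111 2222 3333 4444"; ExpiryDate = "10/2015"; ValidFrom = None }
let newCard = { CardNumber = "5555 6666 7777 8888"; ExpiryDate = "10/2016"; ValidFrom = Some "10/2013" }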
STATES
Another common scenario is when an object can be in different states, with different information needed for each state.
Consider information about an order that might or might not be shipped yet:
type ShippingStatus = {
    IsShipped: bool;
    ShippedDate: DateTime;
    TrackingNumber: string }
Again, the definition leaves room for ambiguity. Can IsShipped be true without specifying a ShippedDate? Is it OK to set the TrackingNumber to something even if the IsShipped flag is false?
Refactoring to a choice type eliminates these ambiguities
completely:
type ShippingInfo = {
    ShippedDate: DateTime;
    TrackingNumber: string option }

type ShippingStatus =
    | Unshipped
    | Shipped of ShippingInfo
The new design makes it clear that ShippedDate and TrackingNumber can only be set when the status is "shipped". And then ShippedDate is required and TrackingNumber is optional.
WRAPPING PRIMITIVE TYPES
Finally, let's look at a common problem: domain types that have the same representation but which must never be confused.
For example, you might have an OrderId and a CustomerId, both of which are stored as ints. But they are not really ints. You cannot add 42 to a CustomerId. More subtly, CustomerId(42) is not equal to OrderId(42). In fact, they should not even be allowed to be compared at all.
In F#, you can model this by wrapping the primitive type, like this:
type CustomerId = CustomerId of int
type OrderId = OrderId of int
We now have completely distinct types. This not only makes the design clear but also enables the type-checker to prevent any mix-ups in assignment or parameter passing.
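For instance, in this small sketch (the lookupOrder function and the customer/order values are invented here for illustration, reusing the two wrapper types defined above), passing the arguments in the wrong order becomes a compile-time error rather than a silent bug:

let lookupOrder (OrderId orderId) (CustomerId customerId) =
    // Unwrap the ints only at the point where we actually need them
    printfn "Fetching order %d for customer %d" orderId customerId

let customer = CustomerId 42
let order = OrderId 42

lookupOrder order customer      // compiles
// lookupOrder customer order  // does not compile: a CustomerId is not an OrderId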
CONCLUSION
I hope this has whetted your appetite for using F# as a design tool. To learn more about this topic, search the internet for "designing with F# types".
Scott Wlaschin is a .NET developer, architect and author with over 20 years' experience. He blogs about F# at fsharpforfunandprofit.com.
Implementing (the original) Lisp in Python
By Kjetil Valle

As programmers we take great pride in keeping up with new developments in libraries, languages and tools (and usually have a lot of fun doing it as well). If we stop learning, we know we'll soon end up like the Cobol programmers of today. And this is of course a good thing — there's always something new and exciting to learn, something on which to sharpen our skills.
But sometimes I also find that it pays off to take a look at the old, rather than the new. It's good to go back to the roots, to see where it all comes from, and to have a look at the fundamental ideas. Today we'll take a trip back to 1960, to the origins of Lisp as described by John McCarthy in his paper Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I [1].
There is really nothing novel in this article. The ideas all belonged to McCarthy, but sometimes it's good to just study the masters. Let's explore the origins of one of our most powerful (families of) programming languages. We'll start by briefly covering the Lisp syntax and semantics, before moving on to an implementation of the language itself in Python.
A SHORT LISP 101
Let's start out by explaining the basics of Lisp. If you're already familiar with this, I urge you to move on to the next section instead.
I have tried to keep this section brief. If you would like something a bit more elaborate, I really recommend The Roots of Lisp [2] by Paul Graham, where he essentially describes McCarthy's Lisp by example and then shows an implementation in Common Lisp.
The parser works like this:
The parse function takes one argument, the program
string, and returns the corresponding AST. There is also
an opposite function unparse which converts ASTs back
into Lisp source strings.
>>> from rootlisp.parser import unparse
>>> unparse(ast)
"(lambda (x) (cons x (cons x '())))"
Interpreting
Before we move on to evaluating the ASTs, let's define an-
other useful function,
interpret
, which we'll be using to test
our language as we go:
def interpret(program, env=None):
    ast = parse(program)
    result = eval(ast, env if env is not None else [])
    return unparse(result)
The function combines the power of parse and eval. interpret takes a Lisp program as a string, parses it, and finally evaluates the parsed expression. Since the evaluated expression might be a Lisp data structure (and even valid Lisp code) we "unparse" it back to its corresponding source string. This is done to hide our internal ASTs from the user of the Lisp.
The Evaluator
With parsing out of the way, and armed with the interpret function to test our code, it's time to have a look at the core of the language, the eval function. It looks like this:
A wee bit of syntax
Lisp programs consist of something called s-expressions, which stands for symbolic expressions. S-expressions are defined recursively, and consist of either a single atom or a list, which contains other s-expressions.
An atom is akin to what many modern languages would call an identifier. It consists of a series of letters or symbols — anything other than parentheses, single quote and whitespace, basically. Examples of atoms would be a, foo/bar! or +.
(Note that we simplify a bit here, compared to McCarthy's original description. By limiting atoms to not contain spaces, we eliminate the need for commas separating atoms within lists. In the original Lisp, atoms were also limited to upper case letters, a restriction I don't see the need to enforce here.)
Lists use the parenthesis syntax that has become so iconic to Lisp. A list is expressed by a pair of parentheses enclosing a number of elements. Each element is another s-expression, which might be either an atom or a list. Here are a couple of examples:
(list of only atoms)
(another list (with some ((nested)) (lists inside)))
In addition, although it's actually not in the original Lisp, we include a shorthand syntax 'x which is evaluated as (quote x), where x is any s-expression. This way, '(a b c) is interpreted as (quote (a b c)) and similarly, 'foo as (quote foo). We will later see why this turns out to be useful.
The basic semantics
When evaluating an s-expression e, the following rules apply.
• If e is an atom, its value is looked up in the environment.
• Otherwise, the expression is a list like (<e0> <e1> … <en>), which is evaluated as a function application. How this is handled depends on the first element of the list, e0.
  ° If e0 is the name of one of the builtin (axiomatic) forms, it is evaluated as described below.
  ° If e0 is any other atom, its value is looked up. A new list, with the value of e0 replacing the first element, is then evaluated.
  ° If e0 is not an atom, but a list of the form (lambda (<a1> … <an>) <body>), then e1 through en are first evaluated. Then body is evaluated in an environment where each of a1 through an points to the value of the corresponding argument. This constitutes a call to an anonymous function (i.e. a lambda function).
  ° If e0 is of the form (label <name> <lambda>), where lambda is a lambda expression like the one above, then a new list with e0 replaced by just the lambda is constructed. This list is then evaluated in an environment where name points to e0. The label notation is how we solve the problem of defining recursive functions.
The axiomatic forms
The axiomatic forms are the basis on which the rest of the language rests. They behave as follows:
• (quote e) returns e without evaluating it first.
• (atom e) evaluates e and returns the atom t if the resulting value is an atom, otherwise f is returned. (Since we have no boolean type in our language, these two atoms are treated as true and false, respectively.)
• (eq e1 e2) evaluates to t if both e1 and e2 evaluate to the same atom, otherwise f.
• (car e) evaluates e, which is expected to give a list, and returns the first element of this list.
• (cdr e) is the opposite of car. It returns all but the first element of the list gotten by evaluating e. If the list holds only one element, cdr instead returns the atom nil.
• (cons e1 e2) evaluates both e1 and e2, and returns a list constructed with the value of e1 as the first element and the value of e2 as the rest. If e2 evaluates to the atom nil, the list (e1) is returned.
• (cond (p1 e1) … (pn en)) is the conditional operator. It will evaluate the predicates p1 to pn in order, until one of them evaluates to t, at which time it will evaluate the corresponding expression and return its value.

The evaluation rules above are, surprisingly, all we need to implement Lisp. In addition, however, I'd like to include another form that isn't explicitly described by McCarthy, but which is included in Graham's article. It could strictly speaking be replaced by doing a lot of nested labels, but this would make things a lot less readable.
• (defun name (a1 … an) <body>) is a way to define functions and then later use them outside of the defining expression. It does this by adding a new binding to the environment it is itself evaluated in: name → (lambda (a1 … an) <body>).
IMPLEMENTATION IN PYTHON
Now, with the syntax and core semantics of the language outlined, let's look at how to make this happen in Python.
The parser
The first step when implementing a language is usually the parser. We need some way to go from programs as strings to some data structure we can interpret. Such a data structure is usually called the Abstract Syntax Tree (AST) of the program.
Since Lisp is a language largely without syntax, with parentheses and atoms used for everything, writing the parser is relatively easy and uninteresting. This is not what I want to focus on in this article, so we'll skip over the details here. Feel free to have a look at the code for the parser [3] before we move on, if you like.
>>> from rootlisp.parser import parse
>>> program = "(lambda (x) (cons x (cons x '())))"
>>> ast = parse(program)
>>> ast
['lambda', ['x'], ['cons', 'x', ['cons', 'x', ['quote', []]]]]
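For readers who would rather not open the repository right away, here is a minimal sketch of what such a parser could look like. This is not the code from [3], just an illustration of the idea; tokenize and read_exp are helper names made up for this sketch:

def tokenize(source):
    """Split the source into '(', ')', quote and atom tokens."""
    return source.replace("(", " ( ").replace(")", " ) ").replace("'", " ' ").split()

def parse(source):
    """Parse a Lisp source string into a nested-list AST."""
    ast, _ = read_exp(tokenize(source))
    return ast

def read_exp(tokens):
    """Read one s-expression from the tokens, returning it and the remaining tokens."""
    token, rest = tokens[0], tokens[1:]
    if token == "'":
        # Expand the shorthand 'x into (quote x)
        quoted, rest = read_exp(rest)
        return ["quote", quoted], rest
    elif token == "(":
        # Read elements until the matching closing parenthesis
        exps = []
        while rest[0] != ")":
            exp, rest = read_exp(rest)
            exps.append(exp)
        return exps, rest[1:]
    else:
        # Anything else is an atom, represented as a string
        return token, rest

Running this sketch's parse on the example program above produces the same kind of nested-list AST, with atoms as strings and lists as Python lists.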
def is_atom(exp):
    """Atoms are represented by strings in our ASTs"""
    return isinstance(exp, str)

def eval(exp, env):
    """Function for evaluating the basic axioms"""
    if is_atom(exp): return lookup(exp, env)
    elif is_atom(exp[0]):
        if exp[0] == "quote": return quote(exp)
        elif exp[0] == "atom": return atom(exp, env)
        elif exp[0] == "eq": return eq(exp, env)
        elif exp[0] == "car": return car(exp, env)
        elif exp[0] == "cdr": return cdr(exp, env)
        elif exp[0] == "cons": return cons(exp, env)
        elif exp[0] == "cond": return cond(exp, env)
        elif exp[0] == "defun": return defun(exp, env)
        else: return call_named_fn(exp, env)
    elif exp[0][0] == "lambda": return apply(exp, env)
    elif exp[0][0] == "label": return label(exp, env)
Once again, we treat nil as the empty list.
>>> interpret("(cons 'a '(b c))")
'(a b c)'
>>> interpret("(cons 'a 'nil)")
'(a)'
Evaluating cond
The expressions passed as arguments to cond are all lists of two elements. We evaluate the first element of each of the sublists in turn, until one evaluates to t. When the first t is found, the second element of that list is evaluated and returned.
def cond(exp, env):
    # (cond (p1 e1) (p2 e2) …)
    for p, e in exp[1:]:
        if eval(p, env) == 't':
            return eval(e, env)
Like in McCarthy's original Lisp, our cond is also undefined for cases where no p expression evaluates to t.
>>> program = """
... (cond ((eq 'a 'b) 'first)
... ((atom 'a) 'second))
... """
>>> interpret(program)
'second'
Evaluating defun
As noted above, the defun form isn't one of those specified by McCarthy, but we include it anyway to make the language easier to use. Evaluating a defun expression simply extends the environment where it is called with a label structure containing a lambda. The evaluation of lambdas and labels is described below.
def defun(exp, env):
    # (defun my-fun (a1 …) body)
    name, params, body = exp[1], exp[2], exp[3]
    label = ["label", name, ["lambda", params, body]]
    env.insert(0, (name, label))
    return name
To see what's happening, let's look at the environment after evaluating a defun form.
As you can see, eval takes two arguments, exp and env. exp is one of the ASTs returned by parse; env holds a list of associations which represent bindings from atoms to values in the environment.
We have now covered all the cases we need in order to implement the Lisp. Let's look at the implementation of each in turn. Keep the structure of eval in mind when we go through each case.
Evaluating atoms
The first case we need to cover is when the evaluated expression is an atom. The value of an atom is whatever it is bound to in the environment, so we do a lookup of the atom in env.

def lookup(atom, env):
    for x, value in env:
        if x == atom:
            return value
Let's have a look at how this works in the REPL:
>>> from rootlisp.lisp import interpret
>>>
>>> env = [('foo', 'bar')]
>>> interpret('foo', env)
'bar'
Evaluating quote
The next form is quote, which is incredibly easy to implement: all we need to do is simply to return whatever the argument was, without evaluating it first.

def quote(exp):
    # (quote e1)
    return exp[1]
And it works as expected:
>>> interpret('(quote a)')
'a'
>>> interpret("'a")
'a'
>>> interpret("'(a (b (c) d))")
'(a (b (c) d))'
Evaluating atom
The next case, atom, determines whether the value of exp is atomic or not.

def atom(exp, env):
    # (atom e1)
    val = eval(exp[1], env)
    return 't' if is_atom(val) else 'f'

Our Lisp does not have any boolean datatypes, so we simply return the atoms t or f depending on whether exp is an atom or not.
>>> interpret("(atom 'a)")
't'
>>> interpret("(atom '(a b c))")
'f'
>>> interpret("(atom (atom 'a))")
't'
Evaluating eq
The eq function evaluates to t if the values of its two arguments are the same atom.

def eq(exp, env):
    # (eq e1 e2)
    v1 = eval(exp[1], env)
    v2 = eval(exp[2], env)
    return 't' if v1 == v2 and is_atom(v1) else 'f'
>>> interpret("(eq 'a 'a)")
't'
>>> interpret("(eq 'a 'b)")
'f'
>>> interpret("(eq '(a) '(a))")
'f'
Evaluating car and cdr
The car and cdr forms both evaluate the argument, expecting the resulting value to be a list. car returns the first item of the list; cdr returns the rest of the list, i.e. everything but the first element.

def car(exp, env):
    # (car e1)
    return eval(exp[1], env)[0]

def cdr(exp, env):
    # (cdr e1)
    lst = eval(exp[1], env)
    return 'nil' if len(lst) == 1 else lst[1:]

Notice that if the list contains only one element, cdr returns the atom nil, which represents the empty list.
>>> interpret("(car '(a b c))")
'a'
>>> interpret("(cdr '(a b c))")
'(b c)'
>>> interpret("(cdr '(a))")
'nil'
Evaluating cons
cons, short for construct, returns a list constructed with the value of the first argument as the first element, and the value of the second argument as the rest of the list.

def cons(exp, env):
    # (cons e1 e2)
    rest = eval(exp[2], env)
    if rest == 'nil':
        rest = []
    return [eval(exp[1], env)] + rest
>>> env = []
>>> interpret("""
... (defun pair (x y)
... (cons x (cons y 'nil)))
... """, env)
'pair'
>>> env
[('pair', ['label', 'pair', ['lambda', ['x', 'y'], ['cons', 'x', ['cons', 'y', ['quote', 'nil']]]]])]
null returns t for any list that is without elements, and f otherwise.
> (null '(foo bar))
f
> (null (cdr '(a)))
t
We might also define the common logical operators.
(defun and (x y)
(cond (x (cond (y 't) ('t 'f)))
('t 'f)))
(defun or (x y)
(cond (x 't)
('t (cond (y 't) ('t 'f)))))
(defun not (x)
(cond (x 'f)
('t 't)))
and, or and not all work as one would expect.
> (not 'f)
t
> (not (and 't (or 't 'f)))
f
Further, we can define some functions working on lists. First append, which takes two lists as arguments, returning their concatenation.
(defun append (x y)
(cond ((null x) y)
('t (cons (car x) (append (cdr x) y)))))
A couple of tests show that it works:
> (append '(1 2 3) '(a b c))
(1 2 3 a b c)
> (append 'nil '(a b))
(a b)
Another useful function is zip, which takes two lists as arguments, returning a list of pairs where each pair consists of the corresponding elements from each of the argument lists.
(defun pair (x y)
(cons x (cons y 'nil)))
(defun zip (x y)
(cond ((and (null x) (null y)) 'nil)
((and (not (atom x)) (not (atom y)))
(cons (pair (car x) (car y))
(zip (cdr x) (cdr y))))))
The helper function pair is simply used as a convenience for creating lists of two elements.
> (zip '(a b c) '(1 2 3))
((a 1) (b 2) (c 3))
COMPLETING THE LANGUAGE
These functions are all nice and well, but one thing is still lacking. One of the central concepts in Lisp is that code is data, and vice versa. We already have quote, which enables us to convert code into lists, but we still need some way to evaluate lists as if they were Lisp code again.
Our Lisp cannot do this yet. But, fortunately, we have enough pieces to be able to implement it within the Lisp itself!
Before we go on, let's just define a few shorthand notations for working with combinations of car and cdr. These will help keep the code a bit more concise and readable as we move on.
(defun caar (lst) (car (car lst)))
(defun cddr (lst) (cdr (cdr lst)))
(defun cadr (lst) (car (cdr lst)))
(defun cdar (lst) (cdr (car lst)))
(defun cadar (lst) (car (cdr (car lst))))
(defun caddr (lst) (car (cdr (cdr lst))))
(defun caddar (lst) (car (cdr (cdr (car lst)))))
Next, we need a function to help us look up values from an
environment.
(defun assoc (var lst)
(cond ((eq (caar lst) var) (cadar lst))
('t (assoc var (cdr lst)))))
assoc takes two arguments: the variable we wish to look up, var, and a list of bindings, lst. The bindings in lst are lists of two elements, and assoc simply returns the second element of the first pair where the first element is the same as var.
> (assoc 'x '((x a) (y b)))
a
> (assoc 'y '((x a) (y b)))
b
Evaluating function calls
To round off the case when the first element in exp in eval is an atom, we simply look this atom up in the environment, expecting to find a function. A new list with this function as the first element is then evaluated instead.

def call_named_fn(exp, env):
    # (my-fun e1 …)
    fn = lookup(exp[0], env)
    return eval([fn] + exp[1:], env)
Let's try testing this by calling the pair function we defined with defun above.

>>> env
[('pair', ['label', 'pair', ['lambda', ['x', 'y'], ['cons', 'x', ['cons', 'y', ['quote', 'nil']]]]])]
>>> interpret("(pair 'a 'b)", env)
'(a b)'
Evaluating lambda application
In the example above, pair is looked up in the environment and a new s-expression is evaluated. This new expression holds a function rather than an atom as the first element. (Actually, it holds a label with a function, but the label is stripped away in an intermediate step, as explained below.)
Thus, we end up evaluating an expression where the first element looks something like (lambda (list of parameters) body). The rest of the elements in exp are the arguments to the function. The apply function evaluates such expressions.
def apply(exp, env):
    # ((lambda (a1 …) body) e1 …)
    fn, args = exp[0], exp[1:]
    _, params, body = fn
    evaluated_args = map(lambda e: eval(e, env), args)
    new_env = zip(params, evaluated_args) + env
    return eval(body, new_env)
The first line separates the lambda expression fn and the arguments. The function fn is then split further into its list of parameters and the body. The arguments are then each evaluated, before they are merged with the corresponding parameters and put into the environment. Finally, the body of the function is evaluated in this new environment.
>>> program = """
... ((lambda (x y) (cons x (cdr y)))
... 'z
... '(a b c))
... """
>>> interpret(program)
'(z b c)'
Evaluating label application
The lambda syntax above is fine for defining normal non-recursive functions. It is also expressive enough to make recursive functions using a technique called the Y combinator, but for this McCarthy instead introduces the label notation (which arguably is a lot easier to understand). This evaluation case considers expressions of the form ((label name lambda-expression) arguments).
def label(e, a):
    # ((label name (lambda (p1 …) body)) arg1 …)
    _, f, fn = e[0]
    args = e[1:]
    return eval([fn] + args, [(f, e[0])] + a)
We handle this by extending the environment such that name points to the first element of e, i.e. the label expression. The lambda function is then applied to the rest of the elements of e (the arguments) in this environment, and the value returned.
Let's see an example:
>>> program = """
... ((label greet (lambda (x)
... (cond ((atom x)
... (cons 'hello (cons x 'nil)))
... ('t (greet (car x))))))
... '(world))
... """
>>> interpret(program)
'(hello world)'
TAKING THE LISP FOR A TEST RUN
And with that, we have enough of Lisp implemented to be able to start using it. Let's define a few functions.
We start with something simple, a function for checking whether lists are empty or not.
(defun null (x)
(eq x 'nil))
[NDC London advertisement: speakers include Jon Skeet, Scott Guthrie, Damian Edwards, Juval Lowy, Gary Bernhardt, Robert C. Martin, Alex Rauschmayer, Richard Campbell, Jessica Kerr, Michele Bustamente, Mike Cohn, Jen Meyer, Venkat Subramaniam and Jez Humble. Check the website for the agenda and more speakers: ndc-london.com. Buy your tickets now!]
With this, we are ready to implement the eval function:
As you see, the Lisp version of eval is very similar to the one we implemented in Python, both in structure and in how it works. Let's see a couple of examples.
We have the Lisp implemented in terms of the Lisp itself.
SUMMARY
We have now seen a full implementation of the original Lisp. The final result is, of course, available on GitHub [4].
The core of the language is pretty small. Given only a handful of axiomatic forms, implemented in Python, we were actually able to implement the rest of the language in itself. This implementation even included an eval function, able to interpret any new Lisp code.
Of course, while being a neat little language, our Lisp is missing a lot of features we expect in programming languages today. For example, it has no side effects (no IO), no types other than atoms (e.g. no numbers, strings, etc), no error handling, and it has dynamic rather than lexical scoping. The behaviour is also undefined for incorrect programs, and as an effect of this the error messages (which would bubble up from Python) can be rather strange and uninformative at times.
Most of this could easily be rectified, though, either from within the Lisp itself or by changing the Python implementation. To learn about some of the improvements that could be made, notably lexical scoping and mutable state, I recommend having a look at The Art of the Interpreter [5] by Steele and Sussman.
I hope this article has piqued your interest in how programming languages work, and that you find the implementation of Lisp as delightful as I do.
REFERENCES
[1] McCarthy, John. "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I." Communications of the ACM 3.4 (1960): 184-195.
[2] http://www.paulgraham.com/rootsoflisp.html
[3] https://github.com/kvalle/root-lisp/blob/master/rootlisp/parser.py
[4] https://github.com/kvalle/root-lisp
[5] Steele Jr, Guy Lewis, and Gerald Jay Sussman. "The Art of the Interpreter or, the Modularity Complex (Parts Zero, One, and Two)." (1978).
During the day, Kjetil Valle
writes code for BEKK. At
night, the programming
languages nerd emerges.
Then, you'll find him
somewhere playing around
with a new language,
trying to understand some
concept. He really believes
that learning a new
paradigm will change the
way you think, no matter
what language you happen
to be using.
(defun eval (exp env)
  (cond
    ((atom exp) (assoc exp env))
    ((atom (car exp))
     (cond
       ((eq (car exp) 'quote) (cadr exp))
       ((eq (car exp) 'atom)  (atom (eval (cadr exp) env)))
       ((eq (car exp) 'eq)    (eq (eval (cadr exp) env)
                                  (eval (caddr exp) env)))
       ((eq (car exp) 'car)   (car (eval (cadr exp) env)))
       ((eq (car exp) 'cdr)   (cdr (eval (cadr exp) env)))
       ((eq (car exp) 'cons)  (cons (eval (cadr exp) env)
                                    (eval (caddr exp) env)))
       ((eq (car exp) 'cond)  (evcon (cdr exp) env))
       ('t (eval (cons (assoc (car exp) env)
                       (cdr exp))
                 env))))
    ((eq (caar exp) 'label)
     (eval (cons (caddar exp) (cdr exp))
           (cons (pair (cadar exp) (car exp)) env)))
    ((eq (caar exp) 'lambda)
     (eval (caddar exp)
           (append (zip (cadar exp) (evlis (cdr exp) env))
                   env)))))

(defun evcon (c env)
  (cond ((eval (caar c) env)
         (eval (cadar c) env))
        ('t (evcon (cdr c) env))))

(defun evlis (m env)
  (cond ((null m) 'nil)
        ('t (cons (eval (car m) env)
                  (evlis (cdr m) env)))))
> (eval '(cons x '(b c)) '((x a) (y b)))
(a b c)
> (eval '(f '(bar baz)) '((f (lambda (x) (cons 'foo x)))))
(foo bar baz)
Understanding the world with F#
By Tomas Petricek

These days, you can get access to data about almost anything you like. But turning the raw data into useful information is a challenge. You need to link data from different sources (that are rarely polished), understand the data set and build useful visualizations that help explain the data. This is becoming an important task for an increasing number of companies, but for the purpose of this article, we'll work for the public good and become data journalists. Our aim is to understand US government debt during the 20th century and see how different presidents contributed to the debt.

F# FOR DATA SCIENCE
Why are we choosing F# for working with data? Firstly, F# is succinct and has a great interactive programming mode, so we can easily run our analysis as we write it. Secondly, F# has great data science libraries. After creating a new "F# Tutorial" project, you can import all of them by installing the NuGet package FsLab.
We'll write our code interactively, so let's open Tutorial.fsx and load the FsLab package:
#load "packages/FsLab.0.0.1-beta/lib/FsLab.fsx"
open System
open FSharp.Charting
open FSharp.Data
open Deedle
open RProvider
You'll need to update the number in the #load command. The next lines open the namespaces of four F# projects that we'll use in this article:
• F# Charting is a simple library for visualization and charting based on the .NET charting API
• Deedle is a data frame and time series manipulation library that we'll need to combine data from different sources and to get a basic idea about the data structure
• F# Data is a library of type providers that we'll use to get information about US presidents and to read US debt data from a CSV file
• The R type provider makes it possible to interoperate with the statistical system named R, which implements advanced statistical and visualization tools used by professional statisticians
READING GOVERNMENT DEBT DATA
To plot the US government debt during different presidencies, we need to combine two data sources. The easy part is getting historical debt data - we use a CSV file downloaded from usgovernmentspending.com. The F# Data library also makes it easy to get data from the World Bank, but sadly, the World Bank does not have historical US data. Reading a CSV file is equally easy though:

type UsDebt = CsvProvider<"C:\Data\us-debt.csv">
let csv = UsDebt.Load("C:\Data\us-debt.csv")
let debtSeries =
    series [ for row in csv.Data ->
                 row.Year, row.``Debt (percent GDP)`` ]
The snippet uses the CSV type provider to read the file. The type provider looks at a sample CSV file (specified on the first line) and generates a type UsDebt that can be used to read CSV files with the same structure. This means that when loading the data on the second line, we could use a live URL for the CSV file rather than a local copy.
The next line shows the benefit of the type provider - when iterating over the rows, the row value has properties based on the names of columns in the sample CSV file. This means that ``Debt (percent GDP)`` is statically checked. The compiler would warn us if we typed the column name incorrectly (and IntelliSense shows it as an ordinary property). The double-backtick notation is just an F# way of wrapping arbitrary characters (like spaces) in a property name.
Finally, the notation series [ .. ] turns the data into a Deedle time series that we can turn into a data frame and plot:
let debt = Frame.ofColumns [ "Debt" => debtSeries ]
Chart.Line(debt?Debt)
The first line creates a data frame with a single column named "Debt". A data frame is a bit like a database table - a 2D data structure with an index (here, years) and multiple columns. We started with just a single column, but will add more later. Once we have data in a data frame, we can use debt?Debt to get a specified column. By passing the series to Chart.Line we get a chart that looks like this:

[Line chart of US government debt (percent of GDP) by year]
The data frame structure is mostly immutable, with one exception: it is possible to add and remove columns. The above line adds a column Difference that is calculated by subtracting the previous debt value from the current debt value. If you now evaluate endDebt in F# Interactive (select it and hit Alt+Enter, or type endDebt;; in the console) you will see the following table:
One of the main benefits of using the Deedle library is that we do not have to explicitly align the data. This is done automatically based on the index. For example, the library knows that the Difference column starts from the second value (because we do not have the previous debt for McKinley).
LISTING US PRESIDENTS
To get information about US presidents, you could go to Wikipedia and spend the next 5 minutes typing in the data. Not a big deal, but it does not scale! Instead, we'll use another type provider and get the data from Freebase. Freebase is a collaboratively created knowledge base - a bit like Wikipedia, but with a well-defined data schema.
The F# type provider for Freebase exposes the data as F# types with properties, so we can start at the root and look at Society, Government and US Presidents:
The code uses the F# implementation of LINQ to order the presidents by their number and skip the first 23. This gives us William McKinley, whose presidency started in 1897, as the first one. As usual in LINQ, the query is executed on the server side (by Freebase). The next step is to find the start and end year of the terms in office.
Each object in presidentInfos has a type representing a US politician, which has the Government Positions Held property. This means that we can iterate over all their official positions, find the one titled "President" and then get the From and To values:
The code uses F# sequence expressions, which are quite similar to C# iterators. When we have a president and a position, we can get the years of the From and To values. The only difficulty is that To is null for the current president - so we simply return 2013.
Now we have the data as an ordinary F# list, but we need to turn it into a Deedle data frame, so that we can combine it with the debt numbers:
let presidents =
    presidentTerms
    |> Frame.ofRecords
    |> Frame.indexColsWith ["President"; "Start"; "End"]
The function Frame.ofRecords takes a collection of any .NET objects and creates a data frame with columns based on the properties of the type. We get a data frame with columns Item1, Item2 and Item3. The last line renames the columns to more useful names. Here, we also use the pipelining operator |>, which is used to apply multiple operations in a sequence.
If you now type presidents in F# Interactive, you'll see a nicely formatted table with the last 20 US presidents and their start and end years.
ANALYSING DEBT CHANGE
Before going further, let's do a quick analysis to find out how the government debt changed during the terms of different presidents. To do the calculation, we take the data frame presidents and add the debt at the end of each term.
The data frame provides a powerful "joining" operation that can align data from different data frames. To use it, we need to create a data frame like presidents which has the end-of-office year as the index (the index of the frame created previously is just the number of the row):

let byEnd = presidents |> Frame.indexRowsInt "End"
let endDebt = byEnd.Join(debt, JoinKind.Left)

The data frame byEnd is indexed by years and so we can now use Join to add the debt data to the frame. The JoinKind.Left parameter specifies that we want to find a debt value for each key (presidency end year) in the left data frame. The result endDebt is a data frame containing all presidents (column President) together with the debt at the end of their presidency (column Debt).
Now we add one more column that represents the difference between the debt at the end of the presidency and the debt at the beginning. This is done by getting the endDebt?Debt series and using an operation that calls a specified function for each pair of consecutive values:
Year President Start End Debt Difference
1901 William McKinley 1897 1901 18.60 <missing>
1909 Theodore Roosevelt 1901 1909 18.65 0.050
1913 William Howard Taft 1909 1913 18.73 0.080
1921 Woodrow Wilson 1913 1921 45.10 26.37
1923 Warren G. Harding 1921 1923 38.95 -6.15
1929 Calvin Coolidge 1923 1929 32.25 -6.7
1933 Herbert Hoover 1929 1933 73.77 41.52
1945 Franklin D. Roosevelt 1933 1945 124.1 50.39
1953 Harry S. Truman 1945 1953 79.03 -45.1
1961 Dwight D. Eisenhower 1953 1961 67.49 -11.5
1963 John F. Kennedy 1961 1963 64.00 -3.49
1969 Lyndon B. Johnson 1963 1969 50.72 -13.2
1974 Richard Nixon 1969 1974 46.01 -4.71
1977 Gerald Ford 1974 1977 47.55 1.54
1981 Jimmy Carter 1977 1981 43.46 -4.09
1989 Ronald Reagan 1981 1989 66.89 23.43
1993 George H. W. Bush 1989 1993 80.52 13.63
2001 Bill Clinton 1993 2001 71.17 -9.35
2009 George W. Bush 2001 2009 104.47 33.3
2013 Barack Obama 2009 2013 124.84 20.37
let fb = FreebaseData.GetDataContext()
let presidentInfos =
    query { for p in fb.Society.Government.``US Presidents`` do
            sortBy (Seq.max p.``President number``)
            skip 23 }

endDebt?Difference <-
    endDebt?Debt |> Series.pairwiseWith (fun _ (prev, curr) -> curr - prev)

let presidentTerms =
    [ for pres in presidentInfos do
        for pos in pres.``Government Positions Held`` do
          if string pos.``Basic title`` = "President" then
            // Get start and end year of the position
            let starty = DateTime.Parse(pos.From).Year
            let endy = if pos.To = null then 2013 else DateTime.Parse(pos.To).Year
            // Return three element tuple with the info
            yield (pres.Name, starty, endy) ]
together with a series chunkDebts. We turn each chunk into an area chart with the corresponding president's name as a label. Finally, all charts are combined into a single one (using Chart.Combine), which gives us the following result:
VISUALIZING DATA USING R
Aside from using great F# and .NET libraries, we can also interoperate with a wide range of other non-.NET systems. In our last example, we have a quick look at the statistical system R. This is an environment used by many professional data scientists - thanks to the R type provider, we can easily call it from F# and get access to the powerful libraries and visualization tools available in R.
The installed R packages are automatically mapped to F# namespaces, so we start by opening the base and ggplot2 packages:
open RProvider.``base``
open RProvider.ggplot2
The ggplot2 package is a popular visualization and data exploration library. We can use it to quickly build a chart similar to the one from the previous section. To do this, we need to construct a dictionary with parameters and then call R.qplot (to build the chart) followed by R.print (to display it):
namedParams [
"x", box aligned.RowKeys
"y", box (Series.values aligned?Debt)
"colour", box (Series.values infos)
"geom", box [| "line"; "point" |] ]
|> R.qplot
|> R.print
The namedParams function is used to build a dictionary of arguments for functions with a variable number of parameters. Here, we specify the x and y data series (years and corresponding debts, respectively) and we set the colour parameter to the series with president
PLOTTING DEBT BY PRESIDENT
Our next step is to visualize the combined data. We want to draw a chart similar to the
one earlier, but with differently coloured areas, depending on who was the president
during the term. We'll also add a label, showing the president's name and election year.
To do this, we build a data frame that contains the debt for each year, but adds the name
of the current president in a separate column. This means that we want to repeat the
president's name for each year in the office. Once we have this, we can group the data
and create a chart for each group.
First, we need to use Join again. This time, we use the Start column as the index for each president and then use a left join on the debt data frame. This means that we want to find the current president for every year since 1900:
The data frame byStart only contains values for years when a president was elected. By specifying Lookup.NearestSmaller, we tell the Join operation that it should find the president at the exact year, or at the nearest smaller year (when we have debt for a given year, the president is the most recently elected president).
This means that aligned now has the debt and the current president (together with his or her start and end years) for each year. To build a nicer chart, we make one more step - we create a series that contains the name of the president together with their start year (as a string) to make the chart labels more useful:
The snippet uses aligned.Rows to get all the rows from the data frame as a series (containing individual rows as nested series). Then it formats each row into a string containing the name and the start year. The GetAs method is used to get a column of a specified type and cast it to an appropriate type (here, the type is determined by the format strings %s and %d, because sprintf is fully type checked).
The last step is to turn the aligned data frame into chunks (or groups) based on the current president and then draw a chart for each chunk. Because the data is ordered, we can use Series.chunkWhile, which creates consecutive chunks of a time series. The chunks are determined by a predicate on keys (years):
In our example, the predicate checks that the president for the first year of the chunk
(y1) is the same as the president for the last year of the chunk (y2). This means that we
group the debt values (obtained using aligned?Debt) into chunks that have the same
president. For each chunk, we get a series with debts during the presidency. Now we
have everything we need to plot the data:
chunked
|> Series.observations
|> Seq.map (fun (startYear, chunkDebts) ->
Chart.Area(chunkDebts, Name=infos.[startYear]))
|> Chart.Combine
The snippet takes all observations from the chunked series as tuples and iterates over them using the standard Seq.map function. Each observation has the startYear of the chunk
let byStart = presidents |> Frame.indexRowsInt "Start"
let aligned = debt.Join(byStart, JoinKind.Left, Lookup.NearestSmaller)

let infos = aligned.Rows |> Series.map (fun _ row ->
    sprintf "%s (%d)" (row.GetAs "President") (row.GetAs "Start"))

let chunked = aligned?Debt |> Series.chunkWhile (fun y1 y2 ->
    infos.[y1] = infos.[y2])
names (so that the colour is determined by the president). We also instruct the function to plot the data as a combination of "line" and "point" geometries, which gives us the following result:
SUMMARY
With a bit more work, we could turn our analysis into a newspaper article. The only missing piece is breaking the data down by political party. If you play with the Freebase type provider, you'll soon discover that there is a Party property on the object representing presidents, so you can get the complete source code from fssnip.net/kv and continue experimenting!
We used a number of interesting technologies throughout the process. F# type providers (for Freebase and CSV) make external data a first-class citizen in the programming language, and we were able to explore the data sources from code. Next, we used Deedle (Dotnet Exploratory Data Library) to align and analyse the data and F# Charting to visualize the results, and we also looked at interoperating with the R system. The range of available libraries and tools, together with the succinctness of the language, makes F# the perfect data science toolset.
Tomas is a long-time F# enthusiast, Microsoft MVP and author of the book Real-World Functional Programming. He leads functional programming and F# courses in London and New York and contributed to the development of F# as an intern and contractor at Microsoft Research in Cambridge. He is finishing his PhD at the University of Cambridge, working on functional programming languages.
MOCKING EXPLAINED
By Dror Helper

There's a problem that every developer who ever wrote a unit test knows about: real code has external dependencies. Writing real unit tests is hard because real code might need a fully populated database, or perhaps calls a remote server; or maybe there is a need to instantiate a complex class created by someone else.
All these dependencies hinder the ability to write unit tests. When a complex setup is needed in order for the test to run, the end result is fragile tests that tend to break, even if the code under test works perfectly. But don't despair - there is a solution for test authors, and it's called Mocking (or Isolation).
BEFORE WE START - A UNIT TEST
Unit tests are short, atomic and automatic tests that have clear pass/fail criteria. When writing a test in one of .NET's unit testing frameworks, we write a function decorated by a test attribute. That test is run using a test runner. By writing unit tests, developers can make sure their code works before passing it to QA for further testing. Let's say we want to write a test for a class that handles user operations. And let's pretend our class needs to verify a user's password.
Writing a unit test for that method would look something like this:
[Test]
public void CheckPassword_ValidUserAndPassword_ReturnTrue()
{
    UserService classUnderTest = new UserService();
    bool result = classUnderTest.CheckPassword("President Skroob", "12345");
    Assert.IsTrue(result);
}
By writing additional tests to check other aspects of the CheckPassword method we can make sure it works as required by our specification.
The problem starts when we need to run code that needs to do more than just get two values and return one value - in other words, the problem starts when we need to test in the real world.
Now we have a new class we can use in the "new and improved" unit test, which calls our dummy database:
At first this looks like cheating - it seems that the test
does not use the production code fully, and therefore does
not really test the system properly. But keep in mind that
we're not interested in the working of the data access in
this particular unit test; all we want to test is the business
logic of the UserService class.
In order to test the actual data access we should write additional tests; these ones test that we read and write data correctly into our database. These are called integration tests.
Using the DummyDataAccess class achieves these goals:
• The test does not need external dependencies
(i.e. a database) to run.
• The test will execute faster because we do not perform
an actual query.
The problem with this approach is that we've created more code in the form of a new class we need to maintain. In my experience the simple fake class created yesterday becomes the maintenance nightmare of tomorrow, due to the following:
1. Adding new methods to an existing interface
Let's go back to the example from the beginning of this article. What happens when we add a new method to the IDataAccess interface? We need to also implement the new method in the fake object (usually we'll have more than one, so we'll need to implement it in the other fakes too). As the interface grows, the fake object is forced to add more and more methods that are not really needed for a particular test, just so the code will compile. That's usually necessary work, with almost no value.
2. Adding new functionality to a base class
One way around the method limitation is to create a real class and derive the fake object from it, only faking the methods needed for the tests to pass. Sometimes it can prove risky, though. The problem is that once derived, our fake objects have fields and methods that perform real actions and could cause problems in our tests. An example I encountered in the past showed this exact scenario: a hand-rolled fake was inherited from a production class; it had an internal object opening a TCP connection upon creation. This caused very strange failures in my unit tests. In this case, the team wasted time because of the way the fake object was created.
3. Adding new functionality to our fake object
As the number of tests increases, we'll be adding more functionality to our fake object. For some tests method X returns null, while for other tests it returns a specific object. As the needs of the tests grow and become distinct, our fake object adds more and more functionality until it becomes so complicated that it may need unit testing of its own.
All of these problems require us to look for a more robust,
industry grade solution - namely a mocking framework.
MOCKING FRAMEWORKS
A mocking framework (or isolation framework) is a 3rd party library, and it is a time saver. In fact, compared with writing hand-rolled mocks for the same code, using a mocking framework can save up to 90% of the lines of code!
Instead of creating our fake objects by hand, we can use
the framework to create them, with a few API calls.
Each mocking framework has a set of APIs for creating and
using fake objects, without the user needing to maintain
details that are irrelevant to the test - in other words, if a
fake is created for a specific class, when that class adds a
new method nothing needs to change in the test.
One final remark: a mocking framework is just like any
other piece of code and does not "care" which unit testing
framework is used to write the test it's in.
There are many such mocking frameworks in the .NET world, both open source and commercial - some create a fake object at run-time, others generate the needed code during compilation, and yet others use method interception to catch calls to real objects and replace these with calls to a fake object. Obviously the framework's technology dictates its functionality.
PROBLEMS IN UNIT TESTS PARADISE
What happens when the method we're testing requires access to a database in order to check if a user entry exists? Let's revise the code under test from the example above - this time with data access.
Now we need to create a new database, populate it with "real" data, run the test, and then clean/restore the database. For each test run - a time consuming endeavor.
Another problem is that the external dependency makes the test more "fragile", and it will soon become a maintenance headache if not handled correctly.
If a test is dependent on a specific environment on a specific PC, it could fail when some factor of that environment changes. The test would fail "some of the time" - these tests cannot be trusted, they fail to find real bugs, and after a while the developer learns to ignore their failures, completely defeating the objectives of writing unit tests.
HAND ROLLED MOCKS - AND WHY YOU SHOULD AVOID THEM
Usually when faced with such a problem, developers tend to solve it by writing more code. In this case it's quite simple to replace the "real" data access with a dummy class that would look something like this:
public class DummyDataAccess : IDataAccess
{
    private User _returnedUser;

    public DummyDataAccess(User user)
    {
        _returnedUser = user;
    }

    public User GetUser(string userName)
    {
        return _returnedUser;
    }
}
public class UserService
{
    private IDataAccess _dataAccess;

    public UserService(IDataAccess dataAccess)
    {
        _dataAccess = dataAccess;
    }

    public bool CheckPassword(string userName, string password)
    {
        User user = _dataAccess.GetUser(userName);
        if (user != null)
        {
            if (user.VerifyPassword(password))
            {
                return true;
            }
        }
        return false;
    }
}
public void CheckPassword_ValidUserAndPassword_ReturnTrue()
{
User userForTest = new User("President Skroob", "12345");
IDataAccess fakeDataAccess = new DummyDataAccess(userForTest);
UserService classUnderTest = new UserService(fakeDataAccess);
bool result = classUnderTest.CheckPassword("President Skroob", "12345");
Assert.IsTrue(result);
}
For example, if a specific framework works by creating new objects at run-time using inheritance, then that framework cannot fake static methods or classes that cannot be derived from. It's important to understand the differences between frameworks prior to committing to one. Once you have built a large number of tests, replacing a mocking framework can be expensive.
WHAT TO EXPECT FROM YOUR MOCKING FRAMEWORK
Any mocking framework should support at least the fol-
lowing three capabilities:
1. Create fake object
A mocking framework creates a fake object to be used in
a test. Usually a default behavior can be set in this stage.
In order to configure more detailed function behavior
additional code is required.
2. Set behavior on fake objects
After we’ve created the fake object more often than
not we will need to set specific behavior on some of its
methods.
3. Verify methods were called
Also known as interaction testing. The mocking frame-
work gives us the power to test that a specific method
was called (or not called), in which order, how many times
and with which arguments.
Some mocking frameworks have additional functional-
ity, such as the ability to invoke events on the fake object
or cause the creation of a specific fake object inside the
product code. However these three basic capabilities
are the core functionality expected from every mocking
framework. Additional features should be compared and
checked when deciding which mocking framework to use.
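To make those three capabilities concrete, here is a sketch of the earlier CheckPassword test rewritten with Moq, one popular open source mocking framework for .NET. The article does not prescribe a particular framework, and others expose different but equivalent APIs:

using Moq;
using NUnit.Framework;

[TestFixture]
public class UserServiceTests
{
    [Test]
    public void CheckPassword_ValidUserAndPassword_ReturnTrue()
    {
        // 1. Create the fake object - no hand-written DummyDataAccess class needed
        var fakeDataAccess = new Mock<IDataAccess>();

        // 2. Set behaviour: return a known user for the expected user name
        var userForTest = new User("President Skroob", "12345");
        fakeDataAccess.Setup(d => d.GetUser("President Skroob"))
                      .Returns(userForTest);

        var classUnderTest = new UserService(fakeDataAccess.Object);
        bool result = classUnderTest.CheckPassword("President Skroob", "12345");
        Assert.IsTrue(result);

        // 3. Verify the interaction: GetUser was called exactly once
        fakeDataAccess.Verify(d => d.GetUser("President Skroob"), Times.Once());
    }
}

Because the fake is generated by the framework, adding a new method to IDataAccess later does not force any change to this test.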
SUMMARY
Unit testing has many benefits when used as part of the development process. The early feedback you receive from your tests helps you avoid bugs and increases confidence in your code. By writing and running unit tests you can make sure that new bugs weren't introduced, and that you gave QA actual working code.
Mocking frameworks are essential tools for writing unit tests. In fact, without such a tool you are likely to fail in the effort - you will end up either with unit tests that don't give early feedback, or with no tests at all.
This is why deciding on a mocking framework (or some similar solution) is as important as choosing the unit testing framework itself. Once you pick a framework, master it. It helps make your unit testing experience easy and successful.
Dror Helper is a senior consultant working at CodeValue. In
his previous jobs as software developer and an architect he
has designed and written software in various fields including
video streaming, eCommerce, performance optimization and
unit testing tools. Dror speaks in local and international
venues about software development, agile methodologies,
unit testing and test-driven design. In his blog Dror writes
about programming languages, development tools, and
anything else he finds interesting.
OWASP Top Ten
By Christian Wenz
The Open Web Application Security Project (OWASP) [1] is a not-for-profit organization of volunteers that provides guidance, best practices, tools and more on various topics of web application security. Probably the best-known information OWASP provides is the Top Ten List [2], which is released in a new version every three years. This article lists the security vulnerabilities identified in the 2013 edition and what they mean for ASP.NET developers.
1. INJECTION
“Injection” is a common term for sev-
eral kinds of vulnerabilities. In gen-
eral, untrusted data is sent to a kind
of interpreter. The most commonly
found kind of injection is SQL Injec-
tion, where user input is injected into
a SQL query. Although this vulnera-
bility is at the top spot, it is quite rare
in ASP.NET applications, since not only does the technology support prepared statements, but Microsoft also pushes its own OR/M solution, Entity Framework, hard. Both separate data from commands, so developers actually have to try really hard to allow SQL injection.
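As an illustration only (not from the article), here is what a prepared statement looks like in plain ADO.NET; connectionString and userNameFromRequest are assumed placeholder variables, and System.Data.SqlClient is required:

// The user-supplied value travels as a parameter, never as part of the SQL text.
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "SELECT Id, UserName FROM Users WHERE UserName = @userName", connection))
{
    command.Parameters.AddWithValue("@userName", userNameFromRequest);
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        // read the matching rows here
    }
}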
2. BROKEN AUTHENTICATION AND
SESSION MANAGEMENT
HTTP does not support any kind of
decent session management, so the
programming frameworks need to
help out. ASP.NET’s session manage-
ment is solid, as long as you do not
activate cookie-less sessions. ASP.
NET also uses security enhancement
for the session cookies, including the
HttpOnly flag, which is baked into the
SessionIDManager class and cannot
be changed, which is a good thing.
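As a small sketch of the same idea applied to cookies you issue yourself (the cookie name and value here are made up, not from the article):

// Any additional cookies the application issues should also be marked
// HttpOnly (and Secure when the site runs over SSL).
var cookie = new HttpCookie("preferences", "compact-view")
{
    HttpOnly = true,
    Secure = true
};
Response.Cookies.Add(cookie);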
3. CROSS-SITE SCRIPTING (XSS)
This security vulnerability is prob-
ably the most well-known: user input
is sent back to the client and will
then be interpreted as, for instance,
JavaScript code. When using ASP.
NET Web Forms, a typical example is
setting the Text property of a Label,
or using the <%= short form. In the
former case, escaping the output
with HttpUtility.HtmlEncode() helps,
and in the latter case <%: does prac-
tically the same. For ASP.NET MVC,
the Razor syntax uses @ for output,
for similar results. Note, though,
that you need additional escaping
when outputting user data in CSS or
JavaScript markup.
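For example (a sketch; Label1 and userComment are assumed names), the same untrusted value is encoded differently depending on where it ends up:

// HTML context: encode before assigning to a Web Forms control
Label1.Text = HttpUtility.HtmlEncode(userComment);

// JavaScript context: a different encoder is required
string scriptSafe = HttpUtility.JavaScriptStringEncode(userComment);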
4. INSECURE DIRECT OBJECT
REFERENCES
In some applications, changing a
parameter value grants access to
something the current user is not
allowed to see. This is, however, independent of the technology used, so you have to fix it at the application level.
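A typical application-level fix looks roughly like this sketch; the repository and the CurrentUserId helper are hypothetical, but the shape of the check is the point:

public ActionResult Invoice(int id)
{
    var invoice = _repository.GetInvoice(id);   // assumed data access object

    // Treat "not yours" the same as "not found" so the id space leaks nothing.
    if (invoice == null || invoice.OwnerId != CurrentUserId())
    {
        return HttpNotFound();
    }
    return View(invoice);
}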
5. SECURITY MISCONFIGURATION
It is mandatory to keep the operating
system and also the framework itself
up-to-date, and not to install superfluous components on the server. Also, ASP.NET configuration options
that are considered insecure (plain
text passwords for the Membership
API, the aforementioned cookie-
less sessions, and so on) should be
avoided.
6. SENSITIVE DATA EXPOSURE
Sensitive data is called that way
because it should not be made avail-
able to the wrong entities. In order to help this effort, proper encryption is mandatory, both on the transport
level (SSL is a must) and wherever the
data is stored (strong algorithms are
required).
7. MISSING FUNCTION LEVEL
CONTROL
There are several ways to protect
server functionality from being
accessed by the wrong users. You
could keep the URLs secret, or you
could control the access on function
(or, if applicable, on controller) level.
Obviously, only the latter of these
two approaches is advisable.
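In ASP.NET MVC the usual way to do this is the [Authorize] attribute, applied at controller or action level; a minimal sketch (the role and user names are placeholders):

[Authorize(Roles = "Administrators")]
public class AdminController : Controller
{
    public ActionResult Index()
    {
        return View();
    }

    // An individual action can tighten the rule further.
    [Authorize(Users = "alice")]
    public ActionResult AuditLog()
    {
        return View();
    }
}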
8. CROSS-SITE REQUEST
FORGERY (CSRF)
Surprisingly late in the top ten list we
find the common vulnerability most
often exploited by letting unsus-
pecting users run HTTP queries
against another server. The target
server automatically receives the
user’s cookies for that server, thus
identifies him or her, and executes
the action for this request, without
the user actually knowing (or want-
ing). The web application may help
mitigate this risk, if GET requests
are never used for state-changing
actions. For POST and other HTTP
verbs, security mechanisms such as
random tokens within each form may
be used. ASP.NET MVC, for instance,
provides the Html.AntiForgeryTo-
ken() helper and the corresponding
[ValidateAntiForgeryToken] attribute
for the associated action.
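A minimal sketch of that pattern (the action and helper are made up): the Razor form emits the token with @Html.AntiForgeryToken(), and the matching POST action validates it before doing any work:

[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult ChangeEmail(string newEmail)
{
    // If the anti-forgery token is missing or wrong, ASP.NET MVC rejects the
    // request before this method is ever reached.
    SaveEmailForCurrentUser(newEmail);   // hypothetical helper
    return RedirectToAction("Index");
}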
9. USING COMPONENTS WITH
KNOWN VULNERABILITIES
As mentioned in top ten list item 5,
the system needs to be up-to-date,
especially if there are known security
vulnerabilities. One good reason for
this is the “zero-day” security vulner-
ability (an issue that was exploited
before there was a patch) the ASP.
NET project suffered from in late
2011, when an issue in the framework could allow a denial of service attack [3]. A patch followed shortly afterwards, outside the usual "second Tuesday of the month" patch cycle.
10. UNVALIDATED REDIRECTS AND
FORWARDS
When users click on a link, many of them only look at the domain name, not the URL that follows. If a website provides open redirect URLs in the form of http://server.tld/redirect?url=http://otherserver.tld, this may be used for phishing
attacks or worse. If your application
has a redirection mechanism, always
verify (white-list, if possible) the tar-
get URL.
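For the common case of a post-login redirect in ASP.NET MVC, a sketch of such a check could use UrlHelper.IsLocalUrl (the action name is made up):

public ActionResult RedirectAfterLogin(string returnUrl)
{
    // Only follow targets that stay on this site; anything else gets a safe default.
    if (!string.IsNullOrEmpty(returnUrl) && Url.IsLocalUrl(returnUrl))
    {
        return Redirect(returnUrl);
    }
    return RedirectToAction("Index", "Home");
}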
More on these vulnerabilities, more
in-depth details and some additional
attacks will be featured in my OWASP
Top Ten 2013 talk at NDC London.
Looking forward to seeing you there!
REFERENCES
[1] https://www.owasp.org/index.php/Main_Page
[2] https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
[3] http://technet.microsoft.com/en-us/security/advisory/2659883
Christian Wenz started working almost
exclusively with web technologies in 1993
and has not looked back since. As a
developer and project manager he was
responsible for websites from medium–
sized companies up to large, international
enterprises. As an author, he wrote or co–
wrote over 100 books that have been
translated into ten languages. Christian
frequently writes for various IT magazines,
is a sought–after speaker at developer
conferences around the world, and is
always keen on sharing technologies he is
excited about with others.
Developers Baking in Security
By Jon McCoy
In the first moments of a project's planning, basic choices are made that impact the security of the final product. Critical choices of what data is accessed and the day-to-day workflow are key to the security stance of the end deliverable.
I propose that the security of an application can be achieved more efficiently (lower cost and/or more impact) if it is baked in from the start. If security is baked in from the start, key design choices can be made to avoid wasted time and/or to select components that are fundamentally advantageous to security.
The difference between an insecure and a secure application is sometimes as little as using Crypto "A" rather than Crypto "B", having your communications system run through central nodes, locating critical logic behind protected servers, building detection points, building security unit tests, building disaster plans, and properly defending critical keys and data.
All of this is fairly simple to design in at the start of the
planning process but can be nearly impossible to change
post-deployment.
Key points of the system, such as what systems talk to
each other, can make it easy to defend/detect attackers
or near impossible to distinguish normal User traffic from
MalUser traffic.
A good security expert will get to know your product, user base, development team, management style, code base, system dependencies, network and more. This will allow recommendations to be put in a proper context with clear development paths. Some security fixes should be done in a very specific way, some cannot be done at all, some are hard, and some should be done at a later time or in a different context.
Lastly - Having a clear implementation plan for a security
fix is critical. A security expert can perform an evalua-
tion finding security weak spots, but unless you have an
expert on staff focused on how to implement a fix the
solution could be worse than the original problem.
A common problem example is: Your passwords should be
over 7 or 14 char long, they should have */Y/z chars and
be changed every 20-100 days. This is nice to have and it
does increase security.
A BAD WAY TO HARDEN PASSWORDS:
• Have IT set the enforcement of strong passwords.
• Then people write down the password and don’t lock
computers.
• Then you fight back, against your users, by enforce-
ment of computer auto time locks.
This descends into a fight between IT-&-Security and your users. Security should always work in the real world; if users fight against it then, I believe, the security/system should change.
A GOOD WAY TO HARDEN PASSWORDS:
-----A Yubikey(www.Yubico.com) is a tiny fake-keyboard
that can remember a password -----
• Buy Yubikeys in bulk and give them out like/with
badges
• The Yubikey should sit on your keychain or badge
• The Yubikey's stored password should be changed AutoMagically every X days
• Then people start to lose their key, you respond with
a policy: If you lose your key it is a “good event” call-in
and we will cancel your key and you can pick one up at
the X-desk
• After a week or month you send a note out about the
keys: “We request you use a few characters with your
key/password (Example: “Jon”+PressKey+”Doe32”
or PressKey+”god32”+Presskey) this will create
Namejy45xwg8f87e9kw8gw87wcLast43 as your
password and next week it will auto rotate to be
Name#ky1d&8t@w^0etLast43
• TODO: Always test EVERYTHING on IT then technol-
ogy-savvy (opt-in users) users then roll it out to the
enterprise
This problem of password management and authentication is faced by almost everyone today; tens of thousands of solutions have been crafted and used. The knowledge in this area is commonplace and constantly evolving, if you live in the security world.
Security Should
• Support and speed up dev/testing/release process
• Increase the overall reliability of the system
• Increase the overall confidence in the system
• Reduce the attack surface

Security Should NOT
• Slow the development cycle
• Halt production
• Revert progress
• Destroy momentum

Jon McCoy is a .NET Software Engineer that focuses on security and forensics. He has worked on a number of Open Source projects ranging from hacking tools to software for paralyzed people.
Hello JavaScript
By Rob Ashton
JavaScript's time has come, and with that coming there are big movements happening in the enterprise away from platforms such as Silverlight and WPF to more web-oriented environments.
Prompted by this mass exodus there has been an influx
of interesting frameworks released into the browser
platform, Backbone, Angular, Ember amongst some of
the favourites - they seek to bring to the browser a lot
of what we have been used to in the desktop and mobile
development environments.
They purport to offer a lot of functionality out of the box;
separation of presentation from logic via various model/
view patterns, client-side routing, two-way binding via a
model, automatic binding to web services - the list goes
on. Very often because this is seen as desirable, the first
question a new JS developer will ask is "which framework
should I be using?".
As a long-time enterprise developer and consultant, I have seen first-hand the devastation caused by questions like "Which ORM should I use? Which HTTP framework should I use? Which workflow framework should we use?". While this activity has led to the creation of plenty of jobs for those cleaning up these projects, only recently have voices started asking the question "Why should we be using a framework at all?".
This growing dissent has led to the strong ethos behind
the NodeJS eco-system, a strong set of minimal core
modules surrounded by a plethora of community provided
modules with single responsibility at their core and a pow-
erful means of delivery in the form of NPM - the package
manager that could.
While the opinion is not universal, there is a strong con-
sensus in the NodeJS community that
• Modules should only try to solve a single problem
• Modules should be small enough to be understood by
most developers
• Frameworks should be avoided unless the short term
wins can be offset by the long-term pain
Often this is seen as a cry to re-invent the wheel, but
the appropriate metaphorical desire is actually that of
wishing to pick the components that make up that wheel
instead of trying to simply attach a monster truck tire
to our Fiat Punto.
STARTING A CLIENT-SIDE APPLICATION WITH NO
FRAMEWORKS
How does this relate to the world of client side frame-
works? Well - NPM is not just for server-side modules!
Using a client-side packaging utility such as Browserify,
creating a new client-side application is as simple as the
following instructions, assuming that NodeJS is installed.
First we'll need to install Browserify itself, which can be
done with NPM like so
npm install -g browserify
This gives us a command line utility which can be used to
compile a JS app into something consumable by the client.
If we're in a directory where we want to write some cli-
ent-side code, we should kick-start our development by
creating a package.json, which tells browserify and NPM
about our client application's dependencies. This can be
achieved with the command npm init, or by manually cre-
ating a file called package.json like so:
{
"name": "our-amazing-application",
"version": "0.0.0"
}
We create an application file called "app.js" next to the
package.json and leave it empty, because we need to go
onto the NPM website (http://npmjs.org/) and look for
our first library.
The first thing we want to do is know when the DOM is
ready, so we search for this and come up with a library
that does just this. "domready"
npm install domready --save
That save directive ensures that our package.json is
updated with the relevant dependencies automatically,
so it now looks like this:
{
"name": "our-amazing-application",
"version": "0.0.0",
"dependencies": {
"domready": "~0.2.13"
}
}
Going into our "app.js" file, we can now use this module
like so:
var domReady = require('domready')

function index() {
  this.innerHTML = "<p>Welcome to the machine</p>"
}

domReady(function() {
  var container = document.getElementById('content')
  index.call(container)
})
Now, this is fabulous, but we obviously want to do a bit
more than that - perhaps some templating with a library
like Mustache. Just like before, we can go onto NPM and install this in the same manner:
npm install mustache --save
And now rather than directly playing with HTML we can
instead utilise our favourite templating library in our code
the same way we used domready.
var domReady = require('domready')
  , mustache = require('mustache')

function index() {
  // the {{name}} tag must match the key passed to render below
  var template = "<p>Hello {{name}}</p>"
  this.innerHTML = mustache.render(template, {
    name: "Bob"
  })
}

domReady(function() {
  var container = document.getElementById('content')
  index.call(container)
})
Maybe we want to get that data from a web server so we can greet the current user by name - well, NPM has the answer for this as well!
npm install request --save
Now we can make a very simple web request to the server
and ask for the data we need before rendering the content:

var domReady = require('domready')
  , mustache = require('mustache')
  , request = require('request')

function index(model) {
  var template = document.getElementById('template').innerText
  this.innerHTML = mustache.render(template, model)
}

domReady(function() {
  var container = document.getElementById('content')

  // request passes the raw response body to its callback,
  // so parse it before handing the model to the template
  request('/currentuser', function(err, response, body) {
    index.call(container, JSON.parse(body))
  })
})
And we can repeat this process for every feature our
application might need.
We can find libraries on NPM for routing, HTML5 history,
web request, testing, streaming, and parsing alongside
much more and build our application organically out of
just the bits we need as we need them.
THIS IS A PLEASURE
This is a very sensible way of building applications, and
one which reduces the dependency of the software being
built on the author of a single black-box framework and
the immediate overhead of understanding that this involves.
It makes it far easier to contribute back to the module system, as modules are provided in neat, self-contained packages that can often be read in their entirety in a few minutes and patched to solve a problem in just a few minutes more.
Rather than spending time searching the internet on how
to deal with edge cases in a framework, we instead are
able to easily build a module to fill the gaps ourselves, or
download one somebody has written before.
So, next time somebody asks "which framework should
we use", we should respond to that question with another
question, "Why do you want a framework?" - and avoid
the pitfall of bad habits from the very beginning of the
project.
Rob splits his time between free contracts that will teach him new things, and paid work for his own consultancy where he helps companies with RavenDB, C#, JS and software practices in general. When not learning or working, he can be found building awful games in JavaScript for the sheer joy of it. At weekends you'll not find him because he is buried deep in a world of Clojure.
POWERFUL QUESTIONS
By Geoff Watts
WHAT IS A POWERFUL QUESTION? It's often really hard to define, which is frustrating because it's such a useful, nay integral, part of a coach's (and ScrumMaster's) toolbox. A coach will spend a lot of time asking questions rather than giving solutions and so wouldn't it be useful if we could define what a powerful question was and then practice the art of asking them? It would not only make us better coaches but also be of more benefit to our coachees, and our teams.
In Co-Active Coaching, powerful questions are defined as "provocative queries that put a halt to evasion and confusion. By asking the powerful question, the coach invites the client to clarity, action, and discovery at a whole new level."
Vogt et al define a powerful question as thought-provoking, generating curiosity in the listener, surfacing underlying assumptions, touching a deep meaning and inviting creativity and new possibilities.
This is good and helpful but still leaves us with a certain amount of ambiguity. Another difficulty is that a question that has a powerful impact on one person at one particular time may be less powerful, or not powerful at all, at a different time or with a different person.
HOW CAN WE GET GOOD AT
SOMETHING SO ABSTRACT?
I have a technique that I like to use
in order to practice my skill of ask-
ing powerful questions. I have found
that, as well as being great practice
for me, it is also a useful coaching
technique in its own right and can
even be used by teams themselves
to help get past a tricky challenge.
I call it "hot-seat questioning"; I was introduced to it on a coaching course with Barefoot Coaching a couple of years ago. As most of the best techniques are, it is very simple yet regularly profoundly effective, and it works in groups of 3-7, although 4-5 is generally optimum.
One person (person A) gets to talk about something that they are finding difficult to make progress on for around 60 seconds while everyone else listens. At the end of the 60 seconds the listeners get the chance to ask one question that is designed to help person A. Person A does NOT answer these questions but rather notices how the questions affect them.
Once everyone has asked person A
a question, person A then provides
feedback to the questioners on how
their questions affected them and/
or how they might have made their
questions more powerful.
Optionally, you may then allow everyone to ask another question each (again with no answers being provided by person A).
After the feedback on the powerful
questions, another person gets the
chance to talk about their topic…
SOME POWERFUL QUESTIONS
FOR YOU
Once you have given this technique a
try, consider one or two of the ques-
tions below:
How did you find talking about something knowing that you wouldn't have to answer any follow up questions?
How did the fact that you knew your
question wouldn't get answered
affect your question?
How did it feel knowing you couldn't
answer the questions you were asked?
What patterns did you notice about
the questions that had the biggest
effect?
Why was (or wasn't) this technique
powerful for you?
How could you use something similar
to this technique with your teams?
"If I had an hour to solve a problem and my life depended on the solution, I would spend the first 55 minutes determining the proper question to ask, for once I know the proper question, I could solve the problem in less than five minutes."
ALBERT EINSTEIN

Geoff Watts is a UK-based Scrum and leadership coach, author of Scrum Mastery: From Good to Great Servant-Leadership, and will be delivering the opening keynote talk at this year's Scrum Gathering in Cape Town.
London
By Mari Myhre
London is a diverse and exciting city with some of the world's best sights and attractions. The city attracts more than 30 million visitors from all over the world every year.
With a population of more than 8
million people, London is the most
populous region, urban zone and
metropolitan city in the United King-
dom. The city has a diverse range of
people and cultures, with more than
300 languages are spoken within its
boundaries. The city is filled with a
thrilling atmosphere and definitely
has something to offer for everyone,
whether you’re interested in arts, cul-
ture, shopping food or entertainment.
Here’s a little guide of London’s most
famous attractions:
BUCKINGHAM PALACE & THE
RIVER THAMES
Buckingham Palace is by far the most
popular tourist destination attract-
ing more than 15 million people
every year. People from all over the
world come to see the famous “guard
change” that takes place outside
the palace every day at 11.30am.
Many of the city’s other attractions
are situated along the banks of the
River Thames. Here you will find the
Tower of London, where tourists can see the remarkable Crown Jewels. The Houses of Parliament, together with the fabulous clock tower Big Ben, are also in the same area. A few blocks away you will find the London Eye, Europe's tallest Ferris wheel. It's the most popular paid tourist attraction in the UK, visited by more than 3.5 million people every year, and from it you can see the city from above.
THE BOROUGH MARKET
London is also well known for its dif-
ferent markets, such as Borough Market - a wholesale and retail food market based in Southwark, central London. It is one of the largest
and oldest food markets in the city
and offers a variety of foods from
all over the world. The market gives
you a different taste of the city and
has become a fashionable place to
buy fresh food in London. Amongst
the products on sale are fresh fruit,
vegetables, cheese, meat and freshly
baked breads. There is also a wide
variety of cooked food and snacks
for the many people who flock to the
market. Borough Market has also been used as a film location for several movies, including the Harry Potter series and Bridget Jones.
HYDE PARK
Hyde Park is one of the largest parks in the capital, best known for Speakers' Corner. The park has
become a traditional location for
mass demonstrations, and the venue
of several concerts. Hyde Park also
hosts the popular Christmas fair
Winter Wonderland every year, fea-
turing UK’s largest ice rink, a charming
Christmas market and a huge obser-
vation wheel. Winter Wonderland
is definitely London’s most magical
Christmas destination and absolutely
worth a visit if you’re in town later this
year.
WINE & DINE
The city is also packed with exclusive restaurants, which offer you tastes
from all over the world. Whether you
fancy British, Indian, Italian or Scan-
dinavian, London has it all. The most
authentic American restaurant, Big
Easy is located in the heart of Chelsea.
The founder and owner, Paul Corrett, was a lover of all things American and had a desire to open a place that
could offer “Good Food, Served Right”.
Big Easy brings you back to a simpler
time. With its lovely atmosphere it’s
the perfect place to sit back, relax and
enjoy a home cooked meal. The restau-
rant offers everything from American steaks to fresh Canadian lobster. Big
Easy attracts movie stars, journalists,
politicians, the British royal family and
of course its regulars. Charming bars
and pubs are also located around each
corner in London. Princess Louise, a pub based in Holborn, is the place to be for beer fanatics. However, if you're
looking for the town’s best cocktails,
stop by Callooh Callay in Shoreditch.
London is unique compared to other
cities and definitely has something
new to offer no matter how many
times you have visited the city. It’s
the perfect destination whether you
are in town for business or pleasure.
And it's probably true what the famous English author Samuel Johnson once said, "When you're tired of London, you're tired of life".
Adding Animations as the final gloss for your Windows Phone app
By Andy Wigley
Appropriate use of animations is a key part of any successful Windows app, whether on Windows 8.x or on Windows Phone. For Windows Phone apps, there are six essential animations that every developer and designer needs to know how to implement: two kinds of page transition animations, the 'tilt effect' to apply to pressed items such as a button, and animations to represent the loading or unloading of an item and the deletion of an item. This article shows you how.
There has been plenty of praise from reviewers for the
“buttery smooth” animations that characterise the user
experience on Windows 8 and Windows Phone. They are
built right into the platform, from the way the tiles ani-
mate onto the Start screen when you start the app, and
of course are a fundamental feature of the ‘modern UI’
apps that you launch from there.
In fact, if you build a Windows 8.x Store app in Visual Studio
and create a couple of XAML pages in it, the basic anima-
tions a user would expect to see in an app are built right into
the framework, so you have to do very little as a developer
to create a well behaved app – nice! It’s a similar story with
the built-in controls such as the GridView and ListView, all
the standard animations are built-in so the designer can
concentrate their efforts not on re-implementing these
standard animations in every app, but instead on creating
beautiful experiences elsewhere in the app.
ANIMATIONS IN WINDOWS PHONE APPS
For Windows Phone app developers, unfortunately the
process is a little more manual. Certainly, built-in controls
such as the LongListSelector, Pivot and Panorama have
beautiful animations built-in and while the user is interact-
ing directly with those controls, it’s all beautiful and fluid,
as the user would expect. But as soon as you navigate away
from those controls, perhaps to another page, or if you
build an app out of simpler controls such as the standard
<Page>, then things go a bit awry. Quite a lot of the animations you would expect to see are – well – missing.
A great example of this is exhibited by the standard
Windows Phone Databound App new project template
in Visual Studio.
Create yourself one of these and run it: what is the user
experience like? It’s really not good enough. The main page
navigates nicely into view using the
pleasing turnstile page transition
navigation and then shows a list of
items, but when you tap on an entry
in the list, it navigates to another
page that shows the detail of the
selected item without applying any animations at all. It's just 'tap' and – bang! – the target page simply replaces the source page. No turnstile animations as exhibited by the built-in first-party apps such as the Contacts app, just a straight 'hide one page, show the next' experience – really not good enough.
private void InitializePhoneApplication()
{
if (phoneApplicationInitialized)
return;
// Create the frame but don't set it as RootVisual yet; this allows the splash
// screen to remain active until the application is ready to render.
RootFrame = new TransitionFrame();
RootFrame.Navigated += CompleteInitializePhoneApplication;
3. Now, in every page where you want to support Page transitions (so, actually
that’s *every* page), add the xmlns:toolkit namespace declaration to the top
<Page…> element:
<phone:PhoneApplicationPage
x:Class="DataBoundAppWithPageTransitions.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
xmlns:shell="clr-namespace:Microsoft.Phone.Shell;assembly=Microsoft.Phone"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:toolkit=
"clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Toolkit"
mc:Ignorable="d"
4. Now go to the code for the Windows Phone Toolkit sample app that you downloaded before, and open Samples\NavigationTransitionSample1.xaml in the PhoneToolkitSample.WP8 project. Copy the block of XAML that defines the <toolkit:TransitionService.NavigationInTransition> and the <toolkit:TransitionService.NavigationOutTransition> that is in there, after the opening <Page> element:
<toolkit:TransitionService.NavigationInTransition>
<toolkit:NavigationInTransition>
<toolkit:NavigationInTransition.Backward>
<toolkit:TurnstileTransition Mode="BackwardIn"/>
</toolkit:NavigationInTransition.Backward>
<toolkit:NavigationInTransition.Forward>
<toolkit:TurnstileTransition Mode="ForwardIn"/>
</toolkit:NavigationInTransition.Forward>
</toolkit:NavigationInTransition>
</toolkit:TransitionService.NavigationInTransition>
<toolkit:TransitionService.NavigationOutTransition>
<toolkit:NavigationOutTransition>
<toolkit:NavigationOutTransition.Backward>
<toolkit:TurnstileTransition Mode="BackwardOut"/>
</toolkit:NavigationOutTransition.Backward>
<toolkit:NavigationOutTransition.Forward>
<toolkit:TurnstileTransition Mode="ForwardOut"/>
</toolkit:NavigationOutTransition.Forward>
</toolkit:NavigationOutTransition>
</toolkit:TransitionService.NavigationOutTransition>
5. Paste that XAML into each of your pages.
6. Run, and enjoy your beautiful page navigation transition animations!
IMPLEMENTING PAGE TRANSITION ANIMATIONS
Happily, this shortcoming is easily rectified. For this, you
need to add a reference to the Windows Phone Toolkit
which is a collection of ‘extra’ bits and pieces provided
by the Windows Phone developer platform engineering
team. It is distributed on NuGet which is the package man-
ager for the Microsoft development platform.
To install the Windows Phone Toolkit, first make sure you
have got the most up to date NuGet Client installed. To
check this, open Visual Studio and click on the Tools menu,
then click Extensions and Updates. In the Extensions and
Updates window, click on the Updates section shown in
the menu on the left hand side – if an update is listed for
the NuGet Package Manager for Visual Studio, click to
install it.
Back in Solution Explorer, right-click on your project file
and on the context menu, click Manage NuGet Packages…
which launches the NuGet client wizard. In the Search box,
type Windows Phone Toolkit to locate the correct pack-
age in the NuGet online library. When it finds it, click the
Install button to download the assembly and add it to your
project references.

OK, so what happens next? Well, at this point you need a
bit of insider knowledge (or an article such as this one!)
and with NuGet packages that can be a bit hard to find
sometimes. There is no requirement for publishers of
NuGet packages to provide any detailed documentation
that is directly discoverable from Visual Studio or from
MSDN. In fact, there is no standard way of providing
documentation. NuGet package publishers are required
to provide a webpage of information about their package,
and a link to that called Project Information is displayed
in the NuGet Package Manager information pane, which
displays on the right when you select a package (as shown
in the image above).
If you click on this link for the Windows Phone Toolkit, it
takes you to http://phone.codeplex.com which is where
you can find out more about the toolkit and indeed, down-
load the entire source code for the package. Documenta-
tion is still a little bit hard to find on the project webpage,
and in fact the ‘documentation’ for the Windows Phone
Toolkit is provided in the form of a sample app that is
included in the source code download for this project. So
download it, build it in Visual Studio 2012 and run the
sample app and examine the source code to figure out
how to use the different features offered within it.
Although all the answers are
revealed by studying the sample
app, it still takes some time to
figure out which animation you
want and how to apply it – which is
where articles like this one come in. We've figured it out, so you don't
have to!
The one you want for most page
transitions is the Turnstile. Turn-
stile Forward In needs to be
applied to the page we are navigat-
ing to, and Turnstile Forward Out
needs to be applied to the page we
are navigating from. When the user
presses the Back button, then the
animations are subtly different:
Turnstile Backward Out on the
page we are leaving, and Turnstile
Backward In on the page we are
navigating back to. Sounds complicated? Fortunately,
it’s easy to implement.
1. First open your App.xaml.cs file in the code editor and
scroll down to the bottom to the collapsed region called
Phone application initialization. Expand this region and
locate the InitializePhoneApplication method and the
sixth line where it sets RootFrame to be a new instance
of PhoneApplicationFrame.
2. Change this to a TransitionFrame, which is a special-
ised version of PhoneApplicationFrame that comes in
the Windows Phone Toolkit and which adds in support
for page transition animations:
To turn this on across the page, simply declare the toolkit namespace as we did with page animations. Then turn on the tilt effect by setting the following attribute in the <Page> element XAML tag:
<phone:PhoneApplicationPage

xmlns:toolkit=
"clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Toolkit"

shell:SystemTray.IsVisible="True"
toolkit:TiltEffect.IsTiltEnabled="True">
There’s much more you can do with Tilt Effect such as apply it on a per control
basis rather than across the page, but for basic usage that’s all you need to know.
BUT I WANT A 5 STAR APP…
Many developers will leave it at that. Page turnstile transition animations and
tilt effect are good additions, but they really have to be considered the bare
minimum acceptable. And if you want a 5 star app in reviews, then you need to
go further – to ‘sweat the details’ as we say.
Feathered Turnstile Animation
This is a variant of the basic Turnstile page animation. Instead of simply ‘fold-
ing’ the page in or out rotating around the vertical edge of the screen, this one
gives more of a ‘page turn’ visual effect. As mentioned before, you should use
this when navigating from a list of items and the destination page is a drill-down
detail page for an item selected on the source page.
This is also included in the Toolkit – see the featheredtransitions sample in the
sample app. To implement this, implement the Turnstiles animation in the same
way as described before, but there are some subtle changes in the XAML you
use to declare the animations on the page with the List control, and the addition
of a FeatheringIndex attribute on the principal controls displayed on the page:

The image above shows the turnstile effect as applied to the Databound App project.
In fact, this is *not* the ideal animation to use in this case. You should use a ‘regular’
turnstile animation such as this when navigating from a page that displays a ‘menu’
of options so that the destination page signifies a 'new task'. When the source page
is a list and you are navigating to another page that is a drill-down detail, then the
appropriate animation is the feathered turnstile (see later in this article).
FURTHER ESSENTIAL ANIMATIONS
Page transition animations can be considered the bare minimum acceptable, but those
alone are nowhere near enough. There is another animation included in the Toolkit
which is extremely easy to apply. This is called Tilt Effect and is a subtle 3D animation
that provides the effect of movement into the screen when the user taps on a button
or a list item or other controls that inherit from button.
Tilt effect is one of those subtle effects that the user may not even notice, but somehow increases their pleasure while interacting with your app. In this frame capture, the "runtime three" list item is pressed, causing it to appear to depress into the screen.
<Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
<phone:LongListSelector x:Name="MainLongListSelector" Margin="0,0,-12,0"
ItemsSource="{Binding Items}"
SelectionChanged="MainLongListSelector_SelectionChanged"
toolkit:TiltEffect.IsTiltEnabled="True"
toolkit:TurnstileFeatherEffect.FeatheringIndex="2">
<phone:LongListSelector.ItemTemplate>
<DataTemplate>
<StackPanel Margin="0,0,0,17">
<TextBlock Text="{Binding LineOne}" TextWrapping="Wrap"
Style="{StaticResource PhoneTextExtraLargeStyle}"/>
<TextBlock Text="{Binding LineTwo}" TextWrapping="Wrap"
Margin="12,-6,12,0"
Style="{StaticResource PhoneTextSubtleStyle}"/>
</StackPanel>
</DataTemplate>
</phone:LongListSelector.ItemTemplate>
</phone:LongListSelector>
</Grid>
</Grid>
</phone:PhoneApplicationPage>
As you can see above in the highlighted XAML, the page transitions are defined using the TurnstileFeatherTransition, and you must also set the TurnstileFeatherEffect.FeatheringIndex attached property on the controls to define the order in which the page should animate the controls as the page transition takes place. So in the XAML shown above, the two TextBlock controls in the header StackPanel animate first (FeatheringIndex '0' and '1'), while the LongListSelector control containing the list animates last:
<phone:LongListSelector x:Name="MainLongListSelector" Margin="0,0,-12,0"
...
toolkit:TurnstileFeatherEffect.FeatheringIndex="2">
The result is very pleasing:

Notice that the target page, the drill-down detail page, just uses a regular turnstile animation,
as before.
ITEM LOADING, UNLOADING AND DELETION ANIMATIONS
There are many other animations a skilled designer can employ to convey meaning and emphasis to the user experience, but the three others that I consider to be essential are when your app loads an item, unloads an item, and deletes an item. These animations are taken from quite an old blog post by my Microsoft colleague, Jerry Nixon, who is a Technical Evangelist based in Colorado. A few years ago, back when we could still call our design language 'metro', he wrote an excellent series of blog posts on must-have animations for Windows Phone apps: http://blog.jerrynixon.com/2012/01/mango-sample-5-must-have-animations_4508.html. In it he describes five animations, the implementation of two of which, the turnstile and 'Select Item' (a.k.a. tilt effect), have been superseded by the capabilities in the Windows Phone Toolkit as already described
<phone:PhoneApplicationPage
x:Class="DataBoundAppWithPageTransitions.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
xmlns:shell="clr-namespace:Microsoft.Phone.Shell;assembly=Microsoft.Phone"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:toolkit=
"clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Toolkit"
mc:Ignorable="d"
d:DataContext="{d:DesignData SampleData/MainViewModelSampleData.xaml}"
FontFamily="{StaticResource PhoneFontFamilyNormal}"
FontSize="{StaticResource PhoneFontSizeNormal}"
Foreground="{StaticResource PhoneForegroundBrush}"
SupportedOrientations="Portrait" Orientation="Portrait"
shell:SystemTray.IsVisible="True"
toolkit:TiltEffect.IsTiltEnabled="True">
<!--Transitions-->
<toolkit:TransitionService.NavigationInTransition>
<toolkit:NavigationInTransition>
<toolkit:NavigationInTransition.Backward>
<toolkit:TurnstileFeatherTransition Mode="BackwardIn"/>
</toolkit:NavigationInTransition.Backward>
<toolkit:NavigationInTransition.Forward>
<toolkit:TurnstileFeatherTransition Mode="ForwardIn"/>
</toolkit:NavigationInTransition.Forward>
</toolkit:NavigationInTransition>
</toolkit:TransitionService.NavigationInTransition>
<toolkit:TransitionService.NavigationOutTransition>
<toolkit:NavigationOutTransition>
<toolkit:NavigationOutTransition.Backward>
<toolkit:TurnstileFeatherTransition Mode="BackwardOut"/>
</toolkit:NavigationOutTransition.Backward>
<toolkit:NavigationOutTransition.Forward>
<toolkit:TurnstileFeatherTransition Mode="ForwardOut"/>
</toolkit:NavigationOutTransition.Forward>
</toolkit:NavigationOutTransition>
</toolkit:TransitionService.NavigationOutTransition>

<Grid x:Name="LayoutRoot" Background="Transparent">
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<!--TitlePanel contains the name of the application and page title-->
<StackPanel Grid.Row="0" Margin="12,17,0,28">
<TextBlock Text="MY APPLICATION"
Style="{StaticResource PhoneTextNormalStyle}"
toolkit:TurnstileFeatherEffect.FeatheringIndex="0"/>
<TextBlock Text="page name" Margin="9,-7,0,0"
Style="{StaticResource PhoneTextTitle1Style}"
toolkit:TurnstileFeatherEffect.FeatheringIndex="1"/>
</StackPanel>
in this article. The other three, ‘Load item’, ‘Unload item’
and ‘Delete Item’ that he describes are still part of my
essential tools though.
As Jerry describes it so well, the Load Item animation
is “so simple and so pleasant. Instead of popping a new
page’s content, content is gently raised and revealed. …
The whole storyboard duration is .75 seconds. It gives the
user that the content is “being brought in”.”
The Unload Item animation is simply the reverse of this.
The Delete Item animation, Jerry describes as “dramatic,
but communicative and fast. It is also consistent. We see
this animation used in Email when a message is deleted.
Without a doubt, this is my favorite of them all – because
it is so visually communicative.”
As to how to implement these, there is no point in me
simply repeating what Jerry has described so well, so go
over and read how to implement them at his blog: http://
blog.jerrynixon.com/2012/01/mango-sample-5-must-
have-animations.html . The animations are implemented
in code instead of XAML but that doesn’t make them any
less appropriate. The only slight disadvantage of the
Load/Unload animations is that they require you to set
the initial state of your LayoutRoot element in XAML to
Visibility.Collapsed which can be a slight annoyance when
you want to work on the page design in the Visual Studio
designer – you just have to remember to toggle the vis-
ibility before and after each design session. The results
for the user experience make it worth it though!
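To give a flavour of what is involved, here is a minimal sketch of a 'Load Item' style animation written in code-behind. It is my own illustration rather than Jerry's exact code, and the method name, offsets and timings are assumptions: it slides the content up into its final position while fading it in over 0.75 seconds.

// Requires System.Windows.Media and System.Windows.Media.Animation.
// Call from the page's Loaded event, passing LayoutRoot (which starts Collapsed).
private void PlayLoadItemAnimation(FrameworkElement element)
{
    element.RenderTransform = new TranslateTransform();
    element.Visibility = Visibility.Visible;

    var duration = new Duration(TimeSpan.FromSeconds(0.75));
    var storyboard = new Storyboard();

    // Raise the content from 100 pixels below its resting position.
    var slide = new DoubleAnimation { From = 100, To = 0, Duration = duration };
    Storyboard.SetTarget(slide, element);
    Storyboard.SetTargetProperty(slide,
        new PropertyPath("(UIElement.RenderTransform).(TranslateTransform.Y)"));
    storyboard.Children.Add(slide);

    // Fade the content in over the same period.
    var fade = new DoubleAnimation { From = 0, To = 1, Duration = duration };
    Storyboard.SetTarget(fade, element);
    Storyboard.SetTargetProperty(fade, new PropertyPath("Opacity"));
    storyboard.Children.Add(fade);

    storyboard.Begin();
}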
Andy Wigley works as a Technical
Evangelist for Microsoft UK,
working to support the developer
community to create mobile apps
on Microsoft platforms. Andy is
well-known for the popular
Windows Phone JumpStart videos
which are available on http://
channel9.msdn.com. He has
written a number of books for
developers published by
Microsoft Press and is a regular
speaker at major conferences
such as Tech Ed.
Getting Serious with JavaScript
By Martin Beeby
TypeScript is a new language that makes writing JavaScript applications 89% less ridiculous (this statistic is 99.9% plucked out of the air). If you have a large JavaScript project you'll know that it can become unwieldy really fast. There is nothing elegant about the syntactical sugar we apply to JavaScript to make it behave like an Object Oriented language. TypeScript changes that: we write our code in TypeScript and when we compile it we get 100% pure JavaScript as an output.
STATIC TYPING
The current version of JavaScript, ECMAScript 5, and the future version, ECMAScript 6, are not and will not be statically typed. There are sometimes advantages to this; however, with large code bases a lack of type safety can often cause problems.

Take for example the following:

var number1 = 10;
var number2 = 15;
number2 = number1 + 'some string';

It's perfectly valid JavaScript. However, it's unlikely that
the developer intended to create an integer and then save
over it with a string. Type safety would save us from this
problem since it will not allow you to implicitly cast any
object type to another. In TypeScript our code above
might look more like this:

var number1 = 10;
var number2: number;
// TypeScript alerts us there is a problem!
number2 = number1 + 'some string';

In the example above I'm explicitly setting number2 as
a number.

Types can also be used when we define parameters of a
function. The getArray function, below, accepts a param-
eter x and types it as a string array. So if we try to pass
in a number or a string, TypeScript will alert us right at
development time in the editor.

function getArray(x: string[]) {
var len = x[0].length;
}

Adding types changes none of the emitted JavaScript,
once it's been compiled it will look like plain old Java-
Script.

function getArray(x) {
var len = x[0].length;
}

Behold the beauty of TypeScript: it helps you write less
error prone JavaScript at design time, but you still get
plain old school JavaScript as a result, so your code will
work everywhere JavaScript works.

CODE COMPLETION AND REFACTORING
Now, because TypeScript is type-safe (the hint's in the name), the tools we use can help us out a bit more with our code. Traditionally, if an IDE wants to give you code completion it needs to understand the object type that you are working with. IDEs and text editors use rather sophisticated static code analysis to do this on vanilla JavaScript, but there are limits to how much type information can be inferred and therefore limits to the capabilities of the code analysis approach.

In the 1st example I showed, static code analysis would be
able to figure out the types, but as functions and objects
become more complex, static code analysis will become
less accurate: it's making educated guesses.

Because TypeScript is type-safe, an IDE knows for certain what the object types are and so can provide full code hinting and refactoring tools like you would expect to see in languages such as C# or Java.
CLASSES
When you want to organize your code into logical units,
classes make a lot of sense. The JavaScript language,
however, doesn't support them out of the box. With
some clever syntactical sugar we can make JavaScript
look like it has class but the syntax is slightly messy. Here,
for example, is how you could create a class traditionally
in JavaScript:

var Greeter = (function () {
function Greeter(message) {
this.greeting = message;
}
Greeter.prototype.greet = function () {
return "Hello, " + this.greeting;
};
return Greeter;
})();

var greeter = new Greeter("world");

This is a very simple class; however, it's already very complicated to read. It's also very difficult to remember the syntax and far too easy to make a mistake.

Declaring a class in TypeScript looks very similar to the
proposals for class definitions in ECMAScript 6 (the
next version of JavaScript). That's not a coincidence, but
rather a design goal of TypeScript. TypeScript complements JavaScript, and when ECMAScript 6 is released and prevalent your TypeScript code will look very similar. In fact we may not need TypeScript at that stage.

Here is that same class written in TypeScript:

class Greeter {
greeting: string;
constructor(message: string) {
this.greeting = message;
}
greet() {
return "Hello, " + this.greeting;
}
}

var greeter = new Greeter("world");

As you can see it has more structure, is clearer and more
readable. The IDE is also able to understand that this is a
class and so you can refactor property names safe in the
knowledge it will be updated wherever the class is being
used in the project.

MODULES
Looking at a JavaScript application that you didn't write
or perhaps one you wrote a long time ago can be daunting.
Especially so when the code is poorly organised. Modules
allow you to group together code in a sensible way and
reuse it throughout your project. Patterns such as the
Revealing Module Pattern in JavaScript achieve this goal; however, TypeScript modules become even easier with the module keyword. Again, this keyword has been borrowed from the ECMAScript 6 specification. Constructing a module looks like this:

module Sayings {
export class Greeter {
greeting: string;
constructor(message: string) {
this.greeting = message;
}
greet() {
return "Hello, " + this.greeting;
}
}
}
var greeter = new Sayings.Greeter("world");

You will notice that we used the export keyword on line 2. This allows us to mark internal classes inside a module and make them public so that people using the module will have access to them.
As I come from a C# background I find it helpful to think of modules as namespaces. They allow you to group together common classes in a logical way and, like namespaces, you can use them in numerous projects.

INHERITANCE
If you thought writing a class in JavaScript was like syn-
tactical Gymnastics, then doing inheritance is the Olympic
standard. This is where the power of TypeScript really
starts to shine as the syntax is beautifully simple.

class Animal {
constructor(public name: string) { }
move(meters: number) {
alert(this.name + " moved " + meters +
"m.");
}
}

class Horse extends Animal {
constructor(name: string) { super(name); }
move() {
alert("Galloping...");
super.move(45);
}
}

You simply use the keyword extends and then in your
class you can access the inherited classes properties
and functions.

TYPESCRIPT IS YOUR FRIEND
If you are familiar with Object Oriented Programming
and want to impose some structure on your JavaScript
applications then TypeScript is probably the best option
out there. It will help you write robust , readable code
and will encourage you to refactor and improve your code
without the fear that you are going to accidently break
something. There is so much more I haven't covered in
this article including Generics and Interfaces but the good
news is there is a fantastic community already built up
around TypeScript and if you want to get deeper into the
subject then the answers are only a search engine away.
Martin Beeby works for Microsoft
where he talks to developers about
HTML5, Windows 8 and the web.
Martin has been developing since he
was 16 and over the past 15 years
has worked on projects with many
major brands in the UK.
Share with just your friends
Tinamous is a private social network bringing together people and the Internet of Things.

The core values of Tinamous are:
* Supporting private account areas, so your data can only be seen by the people you have chosen to join your account.
* An open and easy to use API for connecting devices and people.
* A simple and familiar user interface design that shows the important status posts and provides easy access to device measurements and alerts.

The Tinamous API supports reading and writing of status posts, measurements and alerts, as well as account and user management.

To find out more and sign up for your free account, visit http://Tinamous.com
Modular databases with LevelDB and Node.js

LevelDB is a sorted key-value store written by Google fellows Jeffrey Dean and Sanjay Ghemawat. You've probably already used it, as it underpins the IndexedDB features of the Chrome web browser. Like SQLite it's an in-process database, so your application hosts the database itself, and calls the database API directly. LevelDB stores its data as files on the disk with an in-memory cache.

By Richard Astbury
There is good support for LevelDB in Node.js, thanks to
the work of Rod Vagg, author of several modules to make
it really easy to use the database.
Like Node, Level has a small core written in C++ which
offers basic functionality and good performance. This
core is surrounded with a rich ecosystem of JavaScript
modules which extend the capabilities of the database.
Using Node and Level together makes it possible to build
your own database system, by plugging together the
modules you need, to get the functionality your system
demands. Rather than the traditional approach of select-
ing a complete database system, with the design deci-
sions and trade-offs that come with it, you make your own
choices around replication, consistency and functionality.

BASICS OF LEVELDB IN NODE
The easiest way to get started, is with the ‘level’ package
on the NPM registry. This combines ‘leveldown’, a low level
binding to the LevelDB code, and ‘levelup’, a higher level
and more idiomatic JavaScript API.
To get started, install the package using NPM on the com-
mand prompt:
> npm install level
In your node application, you can now get a reference to the package using require:

var level = require('level');

The data is stored in a number of files, all held in a single directory. To create an instance of a database, pass the directory name into the level function ('database' in this case):

var db = level('database');
That's the database set up; you're ready to start using it. Level supports five simple operations; these are:
Put - write a key-value pair to the database:

db.put('key', 'value', function(err){
    if (err) console.log('there was an error');
});

Get - reads a value from the database:

db.get('key', function(err, value){
    if (err) console.log('there was an error');
    console.log(value);
});

Del - delete a key-value pair from the database:

db.del('key', function(err){
    if (err) console.log('there was an error');
});
They’re the simple operations, but next comes the real
power of level.
CreateReadStream - The keys are sorted alphabetically,
and you can read a range of keys out using a stream:
var stream = db.createReadStream({start: 'a', end: 'a~'});

stream.on('data', function(data){
    console.log(data.key + '=' + data.value);
});

stream.on('end', function(){
    console.log('done!');
});
The code snippet above will return all key-value pairs that have a key starting with 'a'. The '~' character sorts after every other printable ASCII character, so using 'a~' as the end of the range ensures that everything from 'aa' to 'az' (and beyond) will be returned by the stream.
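Because the keys are sorted, you can also namespace them with a prefix and read a whole group back in one scan. A minimal sketch, reusing the db instance created above (the 'user~' prefix and the values are just examples):

db.put('user~1', 'alice', function(err){
    if (err) return console.log('there was an error');
    db.put('user~2', 'bob', function(err){
        if (err) return console.log('there was an error');
        // Scan every key that starts with 'user~'
        db.createReadStream({start: 'user~', end: 'user~~'})
            .on('data', function(data){
                console.log(data.key + '=' + data.value);  // user~1=alice, user~2=bob
            });
    });
});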
Batch - A batch function allows you to perform a number
of put/del operations atomically:
var operations = [
    { type: 'put', key: 'foo', value: 'bar' },
    { type: 'del', key: 'baz' }
];

db.batch(operations, function(err){
    if (err) console.log('there was an error');
});
Those are the basics, so you're ready to start storing data in your application; the fun, however, is only just beginning.
NEXT STEPS
The obvious next step is to enable other applications, or
parts of your application to start using this database.
Let’s create a basic web server using the popular ‘express’
framework to allow clients to connect using HTTP:
Use NPM again on the command prompt to install express:
> npm install express
You can now create a web server in node, and provide a couple of routes for getting and putting key-values (see the listing at the end of this article).
We now have a web server running on port 8080, which supports get and put operations. I'll leave you to fill in the blanks for the rest of the operations; in fact you can design any interface you like. You could support websockets, add extra indexes, support JSON documents, whatever you need.
MORE MODULES
A large module ecosystem has built up around level. The list is far too long for this article, but to give you a flavour of some of the possibilities out there, here are a few examples:
Level-sublevel – makes it easy to partition your keys into sub-ranges, and hold different data types in the same database.
Level-master – master-slave replication, allowing you to create a cluster of databases in various configurations.
Levelgraph – a module for storing subject-predicate-object triples.
Multilevel – exposes the level API remotely over a network.
Level-geospatial – stores key values against a geospatial index using latitude and longitude coordinates.
A more complete list is available here: https://github.com/rvagg/node-levelup/wiki/Modules
Go ahead and try some of these modules out, write your
own or just play with Level directly. Playing with a data-
base like this gives you an appreciation for the design
decisions people take when they create database sys-
tems, and enables you to make those decisions your own.
Richard Astbury is a senior consultant at two
10degrees, where he helps software businesses
around Europe migrate their applications to the
cloud. Richard is a Microsoft MVP for Windows
Azure, and is speaking at NDC London on actor
based programming with Orleans. He is often found
developing open source software in C
#
and Node.js,
and lives in rural Suffolk, UK with his wife and two
children.
var app = require('express')();
var db = require('level')('database');

app.get('/:key', function(req, res){
    db.get(req.params.key, function(err, data){
        res.json(data);
    });
});

app.put('/:key/:value', function(req, res){
    db.put(req.params.key, req.params.value, function(err){
        res.json({});
    });
});

app.listen(8080);
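With the listing above in place, a client can exercise the two routes over HTTP. A minimal sketch using Node's built-in http module (the key 'foo' and value 'bar' are made-up examples):

var http = require('http');

// Store the pair foo=bar...
http.request({ method: 'PUT', port: 8080, path: '/foo/bar' }, function(){
    // ...then read it back.
    http.get({ port: 8080, path: '/foo' }, function(res){
        res.on('data', function(chunk){
            console.log('value: ' + chunk);
        });
    });
}).end();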
Feedback for software development
By Marcin Floryan

"No fixed direction remains valid for long; whether we are talking about the details of software development, the requirements of the system, or the architecture of the system. Directions set in advance of experience have an especially short half-life. Change is inevitable, but change creates the need for feedback."
Extreme Programming Explained
In today's fast-paced, volatile, high-stakes software development world we need to employ all the tools at our disposal to maximise the chances of success. I'd like to draw your attention to one such tool, particularly dear to the creators of XP (eXtreme Programming), namely feedback. It's most likely already a cornerstone of your software development approach, and I believe it may warrant a bit more attention to get the most out of it.
WHAT IS FEEDBACK
What better way to introduce the concept of systemic feedback than with a non-inverting amplifier circuit?

[Figure: a non-inverting amplifier circuit. Source: Wikipedia]
For this is exactly where the word and the concept of feedback originate. In their quest for stable, reliable and repeatable amplification, engineers discovered that feeding part of the output signal back into the system yields significant improvements. This principle has since been applied to many amplifiers and control systems, and permeated through to cybernetics and eventually also into software development.

We can think of feedback as "the modification or control of a process or system by its results or effects", or simply as an effective tool for learning and improvement. Writing software is not a manufacturing process but much more a design process, and as such primarily a learning activity. Kent Beck suggests that if there is any measure of an effective day for a developer it's the number of completed learning cycles.

In the case of a circuit board the feedback signal simply gets connected and the system benefits from the closed loop. In human systems and in the design of IT systems we have to explicitly create these connections and take steps to ensure that effective learning occurs. There are many models out there to suggest how this can be done, like the Deming-Shewhart PDCA (Plan, Do, Check, Act) cycle or Boyd's OODA (Observe, Orient, Decide, Act) loop. These
have made their way explicitly into software development, the former into the Scrum Framework and the latter into The Lean Startup approach. You can distil this into a simple five-step formula: make your assumptions explicit, set a clear objective, create a careful design, learn from results, rinse and repeat.

FEEDBACK LOOPS IN SOFTWARE DEVELOPMENT
A typical software development process may be full of feedback loops, some of them obvious and evident, some implicitly embedded in our approach. There are a few common ones, which you will no doubt recognise, so let's briefly consider how feedback works for each of them.

Exploration
Even before we write a single line of code our ideas can be validated in a quick and easy way. Building lo-fi prototypes or storyboards and presenting them to a sample of potential customers or a product owner can provide valuable insights that can be translated into tangible improvements of the idea. On the technical level we can walk through whiteboard modelling sessions and reason about the merits of a proposed design. This can and should be done even for small pieces of functionality, maybe a story or a feature, as at this stage the cost of change is at its lowest. Don't deprive yourself of this opportunity for feedback.

Coding
Long gone are the times when we had to wait days to find out if our code, punched in on a paper card, had the right syntax or would even run, never mind its logical correctness. With modern IDEs this feedback loop has shrunk to seconds. It is possibly the area of some of the biggest improvements in programming, and there is more to come. On the .NET platform, for example, the Roslyn project [http://msdn.microsoft.com/en-gb/roslyn] promises to bring direct, real-time access to the internals of the compiler for better information about the code we're typing in. Tools like ReSharper [http://www.jetbrains.com/resharper/], CodeRush [https://www.devexpress.com/Products/CodeRush/] or JustCode [http://www.telerik.com/products/justcode.aspx] are also very helpful in shortening and improving the value of this feedback loop.

Unit testing
Once the code correctly follows the rules of the language we want to make sure that it actually does what we intended it to do. While the "run it to test it" approach might have worked in the past, for most software solutions it is no longer viable. That's why we write unit tests (we do, right?), which provide specific feedback on the particular bit of code we're writing, in isolation. The value of this loop can be improved if we decide to follow TDD (Test-Driven Development) and write our tests before production code. Here too we can reach for tools that may streamline the process. The Ruby community commonly uses tools like guard [https://github.com/guard/guard], autotest [https://github.com/grosser/autotest] or watchr [https://github.com/mynyml/watchr], and in the .NET space we can now use Mighty Moose [http://continuoustests.com/index.html] to have continuous feedback about the state of our unit tests as we code.

Integration
Writing features that pass unit tests is a good start, but we also need to know that our code works well in its wider ecosystem. This is where the integration feedback loop shines. We put all the assemblies together and deploy them to a machine where we can run higher-level tests, ideally automated. The more frequently we can do this, the sooner we learn whether what we wrote broke the system. That's why many teams apply the practice of CI (Continuous Integration) and build, run and test their entire application stack every time a piece of code is changed. Remember to make the most of this feedback loop and include all dependencies in your integration tests. If some external dependencies are too difficult to continually test against, have an environment where this can still be done on a regular basis.

Acceptance testing
One of the crucial skills when writing software is knowing when to stop. Having a feedback loop at the acceptance level provides exactly this information. This can be done using acceptance criteria written on a story card or automated acceptance tests running as part of our CI pipeline. You can make improvements in this feedback loop by considering ATDD (Acceptance Test-Driven Development) or Specification by Example; both topics are worth investigating.

Demonstrated
I was in a presentation the other week and heard of one development team who put up a banner reminding them "you are not your users". I think most development teams would benefit from such a reminder (unless you work on a development tool for developers). This is why it's so valuable to demonstrate your completed features or stories to your real users and get direct feedback on whether they meet their expectations. After all, "Nothing speaks more clearly to stakeholders than working, usable software," as James Shore reminds us.

Production readiness
Naturally our software is just useless inventory until we put it in front of users in our production systems, so it's worth knowing if the release of a new piece of functionality will work in production. That's why a production readiness feedback loop can help us validate that that's really the case. A practice of Continuous Delivery [http://continuousdelivery.com/] is how we can shorten and use this feedback loop effectively.

Monitoring
Finally our code makes it into production and is subjected to real traffic. From patterns of user behaviour, load and throughput profiles to memory and CPU usage statistics, there is an abundance of information which, when taken back into the development of new parts of the system, can help us significantly improve it. It's worth exploring the full potential of this feedback loop.

Concept to cash
If all of the above feedback loops feel like a burden that your small start-up simply cannot support, this one, final, all-encompassing loop is probably your primary tool to learn about your product. If you can indeed go through the full cycle more quickly than the competition, you're increasing your chances of success. But remaining solely on this path requires discipline and speed, and acceptance that you will be making discoveries (and mistakes) you could've caught earlier. On the other hand, large organisations, with more elaborate inner feedback loops, tend to miss out on this most crucial cycle. How often do you measure the real value delivered by new features released to production? After all, "only code that you can actually release to customers can provide real feedback on how well you're providing value to your customers".

MAKE THE MOST OF IT
I hope these thoughts have encouraged you to look at systemic feedback as a practical tool that you can use to your advantage to improve the success of your software development efforts. Remember, however, that using feedback effectively requires deliberate action and repetition – incremental and iterative development. And if you need more inspiration to make feedback work, consider this final thought from Kent Beck: "Optimism is an occupational hazard of programming, feedback is the treatment."
Marcin Floryan is a Lead Software Engineer at
comparethemarket.com where he helps build
scalable distributed systems applying DDD and
CQRS. He continually learns to develop better
software and to develop software better, to help
others do it and to share that knowledge. He enjoys
working with people and working with code, always
striving to see the whole. He embraces the values of
eXtreme programming and aims to delight custom-
ers by delivering value. Marcin likes to share his
passion and enthusiasm by speaking publicly to
communities small and large and occasionally
scribbling some thoughts at http://marcin.floryan.pl/
You can find him on twitter as @mfloryan
Queue Centric Workflow and Windows Azure

Whenever people talk about design and architecture, they talk about two key words – coupling and cohesion. For a good architecture, you need to have loose coupling and high cohesion. One pattern for loose coupling that is becoming more and more common, particularly when you are writing cloud applications, is Queue Centric Workflow.

By John Azariah and Mahesh Krishnan
So, what is it and how does it work?
Assume you have a typical web appli-
cation, and you’ve layered the design
nicely – with the web layer talking to
a service layer, which in turn talks to
other layers below - maybe a domain
layer, and a data layer.
Typically, when a user performs an
action from the website, the UI layer
makes a call to the service layer and
then waits for it to complete the
action. If this takes a long time to
complete, there are several things
that happen – unresponsive UI, wait-
ing threads, and above all, impatient
and unhappy users. Perhaps a more
desirable scenario might be that
when the user performed an action
from the website, the UI layer issued
a command to the service layer and
returned immediately, allowing the
service layer to process the com-
mand asynchronously at its own
sweet pace.
The Queue Centric Workflow pat-
tern can be used to do this kind of
asynchronous decoupling. When the
UI makes a call to the service layer
to perform a long running opera-
tion, the service layer posts a mes-
sage to a queue with the request and
returns straight away. Other threads
or worker roles can then monitor the
queue and process the posted items.
This approach has a number of advan-
tages:
• If the worker role is slow (or has
been killed), the UI is still respon-
sive, as all it does is add items to
the queue. This decouples the UI
and the processing completely. The
application continues to run for as
long as the queue is available.
• If the queue is getting filled up fast,
you can spin up more instances of
the worker role to do more process-
ing, scaling out only those queues
that are being backed up.
• Azure allows auto-scaling based on
queue length, which makes it easier
to have a dynamically responsive
system which reacts appropriately
to transient load patterns.
Queue Centric Workflow (QCW) is
especially useful when complement-
ing the Command Query Responsibility
Segregation (CQRS) pattern. The
CQRS pattern separates the Read
(Query) operations provided by a
system from the Write (Command)
operations, and allows for the writes
and reads to operate on separate,
optimally chosen, data stores which
can be kept consistent. QCW helps
to process the commands asynchro-
nously, and to keep the read and write
stores consistent.

To make an implementation of QCW
feasible, you need a reliable queuing
mechanism like the one Windows
Azure provides.
Azure Queues are what are known as
at-least-once-delivery queues.
MONEY LEAK
When you are polling the queue continuously you need to
be aware that each call counts towards your bill. 1 cent for
100,000 transactions may not sound like much, but if you
have a dozen worker roles querying the queue 100 times
a second even while messages are not present, this can
quickly add up. Typically, an exponential back-off strat-
egy on an empty queue is a good idea.
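A minimal sketch of such a back-off loop, assuming a hypothetical receiveMessage helper rather than the real Azure storage API:

// A sketch only: receiveMessage is a made-up helper that calls back with
// the next message, or null when the queue is empty (it is not the Azure SDK).
function pollQueue(receiveMessage: (cb: (msg: string) => void) => void) {
    var delayMs = 1000;            // start by waiting one second
    var maxDelayMs = 60000;        // never wait more than a minute

    function poll() {
        receiveMessage(function (msg) {
            if (msg) {
                console.log("processing " + msg);
                delayMs = 1000;    // work arrived: reset the back-off
                poll();            // and check again straight away
            } else {
                delayMs = Math.min(delayMs * 2, maxDelayMs);  // empty: double the wait
                setTimeout(poll, delayMs);
            }
        });
    }
    poll();
}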
SERVICE BUS QUEUES
Do you want to use Service Bus
Queues instead of Windows
Azure Queues? No problem.
Queue Centric Workflows can
be implemented with that too…
Once a message is posted to an Azure
Queue, it stays there forever until
deleted. A worker role may poll the
queue for messages and dequeue the
next one for processing. When a mes-
sage has been dequeued, it becomes
invisible to any other worker roles
polling the queue. If the message
has been successfully processed,
the worker role must delete the
message from the queue – typically
within a few seconds of dequeueing
it. If a message hasn’t been deleted
because, for example, the worker
role died in the midst of processing
it, then the queue makes the message
available for polling again.
PRACTICAL CONSIDERATIONS
Designing your commands: When
designing a QCW for your applica-
tion, it is important to chunk the
work up appropriately. Making the
units of work as small as possible is
a good way to go, because it makes it
easier to make the command idempo-
tent. However, the unit of work must
include all the operations that need
to be transactionally consistent –
so finding that fine balance between
getting as much work done atomically
while keeping the operation idempo-
tent will take the most time in your
design.
Why is Idempotency important?
Imagine that the command being
executed represented a bank trans-
fer from one account to another, and
that something catastrophic hap-
pened after money was deducted
from one account but before it was
added into the other account. When
the queue replays the command,
the next worker role that picked up
the command would then deduct
the money from the source account
again! This is obviously undesir-
able! To prevent this from happen-
ing, we need to model the command
as effecting the operations in such
a way that replaying the command
multiple times would have exactly
the same effect as playing it once.
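A minimal sketch of one way to achieve this (the command id, the account helpers and the in-memory bookkeeping are all made-up for illustration): the command carries its own identifier, and the handler refuses to apply it twice.

interface TransferCommand { id: string; from: string; to: string; amount: number; }

var processed: { [commandId: string]: boolean } = {};

function debit(account: string, amount: number)  { console.log("debit " + amount + " from " + account); }
function credit(account: string, amount: number) { console.log("credit " + amount + " to " + account); }

function handleTransfer(cmd: TransferCommand) {
    if (processed[cmd.id]) {
        return;                    // already applied: replaying the command is a no-op
    }
    // In a real system the bookkeeping and the money movement would be
    // committed in one transaction so a crash cannot separate them.
    debit(cmd.from, cmd.amount);
    credit(cmd.to, cmd.amount);
    processed[cmd.id] = true;
}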
Poison messages: There is a pos-
sibility that a bug in execution of a
command caused the worker to die
before deleting the command from
the queue. When this happens, the
queue will replay the command to
another role, which will also die. If
you get a few of these poisoned
messages in your queue, your entire
workforce may spend all its time
choking on these messages and
cycling up and down. To prevent this
from happening, a good strategy is
to keep track of the number of times
a command has been dequeued, and
remove the message from the queue
once it is apparent that the repeated
failures stem from the command, and
not from a transient infrastructure
failure.
Windows Azure queues support
this by providing a Dequeue Count
for each message, allowing you to
classify poison messages and react
appropriately. A good strategy for
poison messages is to queue them up
into a “Dead Letter” queue and have
a mechanism for recognizing trends
and problems in your design.
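A minimal sketch of that strategy (the message shape, the threshold and the dead-letter store are made-up examples; Azure exposes the attempt count as the message's Dequeue Count):

interface QueueMessage { body: string; dequeueCount: number; }

var MAX_ATTEMPTS = 5;
var deadLetters: QueueMessage[] = [];

function handle(msg: QueueMessage, process: (body: string) => void) {
    if (msg.dequeueCount > MAX_ATTEMPTS) {
        deadLetters.push(msg);     // park the poison message for later inspection
        return;                    // and delete it from the main queue
    }
    process(msg.body);             // normal processing; delete from the queue on success
}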
TL;DR
In summary, QCW or Queue Centric
Workflow is a pattern that allows you
to isolate units of work and execute
them asynchronously using queues.
Commands are queued up in a reli-
able queue, and worker roles poll the
queues to process the commands.
This allows the various layers in the
system to continue without blocking
on long running commands, decou-
ples those layers more effectively,
and allows for those layers to scale
independently and run even when
others are down.
Mahesh Krishnan works as a Principal
Consultant at Readify in Melbourne,
Australia. He is also a Windows Azure
MVP, and has written a couple of books.
He is active in the local .NET community,
and also runs the conference Developer
Developer Developer Melbourne. He is a
regular speaker at conferences such as
Tech Ed.
John Azariah is a senior Architect at
MYOB in Australia and has been working
on one of the largest SQL Azure projects
in the world. He is an alumnus of
companies such as Oracle and Microsoft
and has worked on products like Excel,
Project and Sharepoint. He is passionate
about technology, and is active in the .NET
community in Melbourne and presents
frequently in conferences such as Tech Ed.
Mahesh and John
will be talking about Queue
Centric Workflows and
other patterns in their talk
at NDC London
Project Design – A Call for Action
By Juval Löwy
The software development industry has grown to accept architecture and system design as a required part of every software effort. Even approaches that formerly eschewed any amount of initial design now acknowledge the need to invest up-front in architecture. There are proven methodologies for accomplishing architecture even in the face of unknown and shifting requirements. We learned how to encapsulate change and design a robust system, without knowing the last detail of the design or the code itself.

Much as you design the software system, you must design the project to build the system: from accurately calculating the planned duration and cost, to devising several good execution options, scheduling resources, and even validating your plan, to ensure it is sensible and feasible, and that your team can deliver with acceptable risk, on the schedule and budget.
IT'S ABOUT TIME
As software engineering is coming of age, there is a natural convergence with other, more traditional engineering disciplines. Some 20 years ago software architects were practically non-existent, yet today they are commonplace. But software architects are nothing more than the traditional system engineers of other domains. Even Agile as a methodology has a strong correlation to lean manufacturing and even TQM. It should not be a surprise to see project design making a début in the software world - after all, the other engineering disciplines have been designing their projects for decades. The techniques of software project design are specific to software projects, but in the abstract there is nothing new - these are the same ideas that engineers have been using in classic engineering projects since the 1950s. To use an analogy, if I ask you to design a system that is maintainable, reusable, extensible, secure (or safe), and of high quality, you cannot tell if I am talking about a mechanical system or a software system. In fact, the approach, the Zen of achieving these goals, is the same in software as it is in traditional engineering.

Much the same way, if I ask you to design a project that will comply with a set budget and deadline, within acceptable risk, and be traceable and manageable, you cannot tell if I am talking about a bridge or an ERP system. And not surprisingly, in the abstract, the techniques for designing both types of projects are identical.
COMMON MISCONCEPTIONS
There are several misconceptions regarding project
design. The first is that it is part of project management.
In fact, project design is not project management. The
correct relationship between project design and project
management is what architecture is to programming.
Furthermore, devising a good project design stems
from your system architecture and doing it properly is
a hard-core engineering task – designing the project. This
design task requires both the project manager and the
architect to work closely together to determine the best
overall plan.
The second misconception is that project design contradicts Agile-like development processes. Nothing could be further from the truth. Project design, like architecture, is not mutually exclusive with Agile. We know that you can design a system in a preliminary sprint focused on architecture, even though you don't know exactly what you are going to build. You can do the same with project design. Note that both architecture and project design are activities, while Agile is a development process.

The third misconception is that project design is waterfall-like. In reality, project design is the opposite of the waterfall - it forces you to face head-on the realities of interleaving all the various activities in the project, across developers, completion phases, iterations and milestones, and design the project as you are going to build it, as opposed to a fictional waterfall.

The last misconception is that project design is time consuming, and that under an aggressive deadline you can't afford it. Don't be concerned. A seasoned architect can design the project in a matter of a few days for each given set of planning assumptions.

PROJECT DESIGN AND PROJECT SANITY
Project design allows you to shed light on dark corners, and to have up-front visibility of the true scope of the project. Project design forces managers to think through the work before it begins, to recognize unsuspected relationships and limits, to represent all activities, and to recognize several options for building the system. It allows the organization to determine whether it even wants to get the project done. After all, if the true cost and duration will exceed the acceptable limits, why start the work in the first place, only to have the project canceled once you run out of money or time? As such, once project design is in place, you eliminate the commonplace gambling, death marches, wishful thinking and horrendously expensive trial and error. A well-designed project also lays the foundation for enabling managers to evaluate and think through the impact of a change once work commences, and then to assess the impact of the change request on the schedule and the budget. This enables the project manager and the architect to keep the project on time, all the time.
ASSEMBLY INSTRUCTIONS
Yet there is much more to project design than proper decision making. The project design also serves as the system assembly instructions. To use an analogy, would you buy a well-designed IKEA set without the assembly instructions? Regardless of how comfortable or convenient the furniture is, you will recoil at the mere thought of trying to guess where each of the hundreds of pins, bolts, screws, and plates go, and in which order. Your software system is significantly more complex, and yet architects presume developers and project managers can just go about assembling the system, figuring it out as they go along, without mistakes. While that is possible, it is clearly not the most efficient way of assembling the system. What project design produces is the project plan, much as the system design activity produces the architecture. If the architecture is the "what", the project plan is the "how" - your system assembly instructions. The very nature of producing the assembly instructions is yet another reason why project design is an engineering task rather than a project management one.
[Figure: a sample project design diagram showing activities such as Client App, Emulator, Setup, Queue, Command, System, Handler and Config, with services S1–S5, laid out against milestone dates from 01/12 to 05/05.]
Juval Löwy is the founder of IDesign
and a master software architect
specializing in system and project
design. Juval has mentored hundreds
of architects across the globe, sharing
his insights, techniques, and break-
throughs, in architecture, project
design, development process, and
technology. Juval is Microsoft’s
Regional Director for the Silicon Valley
and had participated in the Microsoft internal strategic design reviews for C#, WCF and related technologies.
Juval is a frequent speaker at the major
international software development
conferences. Juval published several
bestsellers, and his latest book is the
third edition of Programming WCF
Services (O'Reilly 2010). Juval published
numerous articles, regarding almost
every aspect of modern software
development and architecture.
Microsoft recognized Juval as a
Software Legend as one of the world's
top experts and industry leaders.
A SYSTEM OF OPTIONS
What may not be obvious from the discussion so far is
that a given project design is not a single pair of sched-
ule and cost. It is a very different project if you only have
one developer or six, if you are allowed to engage sub-
contractors at key phases, if you try to build the system
as fast as you can or with the least possible cost, or if you
try to avoid risks and maximize the probability for success.
When you design a project you must provide management
with several options trading cost, schedule and risk, allow-
ing management and decision makers to choose up-front
the solution that best fits their needs and expectations.
Providing options in project design is in fact the key to suc-
cess. Clearly, if time is of no consequence then you should
build project at lowest cost. If cost is of no consequence
then you must build the project in the fastest way pos-
sible. But in reality, cost and schedule always matter, and
the best solution is found between these two extremes.
Finding a balanced solution and even an optimal solution
is a highly engineered design task. I say it is engineered
because engineering is all about tradeoffs and accom-
modating reality. With project design, there is no single
correct solution even for the same constraints, much as
there are several possible design approaches for any sys-
tem. It is the task of the project manager and the architect
to narrow this spectrum to several good options for man-
agement to choose from. Specifically, when you design a project you must provide, with reasonable certainty, design options such as:
• A normal solution not subjected to constraints
• The most economical way to meet the deadline
• The fastest way to deliver on set cost
• A constrained solution with limited resources
• How these options vary with regard to quantified risk
Often you will even have to recommend the best overall plan across possible architectures, schedules, cost, resources and risk.
If you don't provide these options, you will have none to blame but yourself. How common it is for the architect to labor on the design of the system and present it to management, only to have them ordain "You have a year and four developers". Any correlation between that year and the four developers and what it really takes to deliver the system is accidental, and so are your chances of success. But what if the same architect were to present the same architecture, accompanied by 3-4 project design options, all of them doable, but reflecting different trade-offs of cost, schedule and risk? Now the discussion revolves around which of these options to choose from the menu. I call this strategy "Cake or Pie". Failure, however, is not on the menu.

CALL FOR ACTION
I have written three pages so far extolling the virtues of project design, and yet I have said nothing about how to go about doing it. That is deliberate - there is no way I can do project design justice in three pages.
learn some basic skills of project design, enough to make
a huge difference. The same is true with software archi-
tecture - armed with basic understanding of encapsulation
and decoupling you can do wonders. On the other hand, the
body of knowledge of project design is as wide and deep as
that of system architecture. It also takes a commensurate
amount of time and experience to do well and fast, just as
with system architecture.
Consequently I frequently witness a cognitive bias in our
industry: most people assume that because they can't do
project design or even worse, because they have never
seen it done properly, then it can't be done. Let me assert
here and now that is nonsense. The same could have been
said about distributed systems architecture twenty years
ago. There are plenty of well-designed software projects.
I have personally spent comparable time over the years
with the IDesign customers on their system architecture
and the project design supporting it. I have educated and
mentored hundreds of architects and project leads on
project design as well as my own techniques and break-
throughs. The results speak for themselves, with success
story after success story. Don't be fooled by the common
cognitive bias. Remember, that the absence of evidence
is not evidence of absence. Do invest in learning more and
then mastering the crucial act of project design. It will set
your career on a different trajectory, restore confidence
between managers, developers and customers, improve
communication all around, reduce overall tension, and
greatly increase your chance of success.
Buy your tickets now!
Ticket type – Price
All Access Pass (All 5 days) – £1,800
All Access Pass (All 5 days) with Hotel* – £2,150
1 Day Conference Pass – £800
2 Day Conference Pass – £950
3 Day Conference Pass – £1,100
3 Day Conference Pass with Hotel* – £1,350
1 Day Pre-Conference Workshop Pass – £600
2 Day Pre-Conference Workshop Pass – £950
GET UPDATED!
Agile & Development
ndc-london.com
DESCRIPTION
During the ScrumMaster class,
attendees will learn why such a
seemingly simple process as Scrum
can have such profound effects on an
organization. Participants gain
practical experience working with
Scrum tools and activities such as
the product backlog, sprint backlog,
daily Scrum meetings, sprint plan-
ning meeting, and burndown charts.
Participants leave knowing how to
apply Scrum to all sizes of projects,
from a single collocated team to a
large, highly distributed team.
YOU WILL LEARN
Practical, project–proven
practices
The essentials of getting a
project off on the right foot
How to write user stories for
the product backlog
Why there is more to leading a
self–organizing team than buying
pizza and getting out of the way
How to help both new and
experienced teams be
more successful
How to successfully scale Scrum
to large, multi–continent projects
with team sizes in the hundreds
Tips and tricks from the instructor's ten-plus years of using Scrum in a wide variety of environments
COURSE DATE
Oslo: 9 December, 10 March
London: 3 December
As you watch the product take
shape, iteration after iteration, you
can restructure the Product Backlog
to incorporate your insights or
respond to changes in business
conditions. You can also identify and
cancel unsuccessful projects early,
often within the first several months. The Certified Scrum Product Owner course equips you with what you need to achieve success with Scrum.
Intuitive and lightweight, the Scrum
process delivers completed incre-
ments of the product at rapid,
regular intervals, usually from every
two weeks to a month. Rather than
the traditional system of turning a
project over to a project manager
while you then wait and hope for the
best, Scrum offers an effective
alternative, made even more attrac-
tive when considering the statistics
of traditional product approaches in
which over 50% of all projects fail
and those that succeed deliver
products in which 64% of the
functionality is rarely or never used.
Let us help you avoid becoming one
of these statistics.
YOU WILL LEARN
Practical, project–proven
practices
How to write user stories for
the product backlog

Proven techniques for prioritizing
the product backlog
How to predict the delivery date
of a project (or the features that
will be complete by a given date)
using velocity
Tips for managing the key
variables infl uencing project
success
Tips and tricks from the instructor's fifteen years of using Scrum in a wide variety of environments
COURSE DATE
Oslo: 11 December, 12 March
This two day course—taught by author and popular Scrum and agile trainer
Mike Cohn—not only provides the fundamental principles of Scrum, it also gives par-
ticipants hands-on experience using Scrum, and closes with Certification as a recognized ScrumMaster.
Certified Scrum Product Owner Training teaches you, the product owner, how to use
the product backlog as a tool for success.
Courses with

MIKE COHN
CERTIFIED SCRUMMASTER - CSM – MIKE COHN
CERTIFIED SCRUM PRODUCT OWNER - CSPO – MIKE COHN
OSLO - www.programutvikling.no
LONDON - www.developerfocus.com
COURSE OVERVIEW OSLO
www.programutvikling.no
ProgramUtvikling
The office and course rooms are located in the IT-Fornebu
technology park approximately 10 minutes away from central Oslo.
Address: Martin Lingesvei 17-25, 1367 Snarøya.
Tel.: +47 67 10 65 65 - Fax: +47 67 82 72 31
www.programutvikling.no - info@programutvikling.no
C++ Days Nov Dec Jan Feb Location Price
Advanced C++ programming - Hubert Matthews 4 Kongsberg 21900
C++11 programming - Hubert Matthews 3 02 Oslo 18900
C++11 programming - Hubert Matthews 3 Kongsberg 18900
C Days Nov Dec Jan Feb Location Price
Deep C: Et kurs for erfarne C og C++ programmerere - Olve Maudal 2 24 Oslo 14900
XML Days Nov Dec Jan Feb Location Price
Exchanging and Managing Data using XML and XSLT - Espen Evje 3 15 Oslo 18900
DATABASE Days Nov Dec Jan Feb Location Price
Databasedesign, -implementering og SQL-programmering - Dag Hoftun Knutsen 4 Oslo 21900
PROGRAMMING Days Nov Dec Jan Feb Location Price
Objektorientert utvikling - Eivind Nordby 4 Oslo 21900
DEVOPS Days Nov Dec Jan Feb Location Price
DevOps - Alex Papadimoulis 3 16 Oslo 18900
COURSETITLE
AGILE Days Nov Dec Jan Feb Location Price
Kanban Software Development - Allan Kelly 2 03 Oslo 14900
Smidig utvikling med Scrum - Arne Laugstøl
1
Oslo 6900
SCRUM Days Nov Dec Jan Feb Location Price
Certifi ed Scrum Product Owner - CSPO - Mike Cohn 2 11 Oslo 14900
Certifi ed ScrumMaster - CSM - Geoff Watts 2 28 Oslo 14900
Certifi ed ScrumMaster - CSM - Mike Cohn 2 09 Oslo 14900
TEST-DRIVEN DEVELOPMENT Days Nov Dec Jan Feb Location Price
Test-Driven Development & Refactoring Techniques - Robert C. Martin 3 02 Oslo 18900
Test-Driven Development - Venkat Subramaniam 5 27 Oslo 24900
Test-Driven JavaScript - Christian Johansen 3 20 Oslo 18900
DESIGN - ANALYSIS - ARCHITECTURES Days Nov Dec Jan Feb Location Price
Evolutionary Design and Architecture for Agile Development - Venkat Subrama-
niam
4 09 17 Oslo 21900
Architecture Skills - Kevlin Henney 3 27 Oslo 18900
MOBILE APPLICATIONS Days Nov Dec Jan Feb Location Price
Android Developer Training - Wei-Meng Lee 5 09 Oslo 24900
Writing Cross Platform iOS and Android Apps using Xamarin and C# - Wei-Meng
Lee
3 04 Oslo 18900
Fundamentals of iOS Programming - Wei-Meng Lee 5 Oslo 24900
MICROSOFT Days Nov Dec Jan Feb Location Price
70-513 - WCF 4.5 with Visual Studio 2012 - Sahil Malik 5 Oslo 24900
Anchor Webcamp - Jon Galloway 1 09 Oslo Free
C#.NET: Utvikling av applikasjoner i .NET med C# - Arjan Einbu 5 11 09 20 Oslo 24900
ASP.NET Web API & SignalR - Christian Weyer 2 09 Oslo 14900
Claims-based Identity & Access Control for .NET 4.5 Applications
- Dominick Baier
2 22 Oslo 14900
Creating Windows 8 Apps using C# and XAML - Gill Cleeren
3 Oslo 18900
Creating Windows 8 Apps using HTML5 and JavaScript - Christian Wenz 3 Oslo 18900
Web Development in .NET - ASP.NET MVC , HTML5, CSS3, JavaScript
- Arjan Einbu
5
16
27
Oslo 24900
WPF/XAML - 70-511 / 10262A Windows Presentation Foundation/XAML
- Arne Laugstøl
3 19 03 Oslo 21900
Developing Windows Azure and Web Services - Magnus Mårtensson 5 11 Oslo 24900
SHAREPOINT Days Nov Dec Jan Feb Location Price
SharePoint 2013 and Office 365: End to End for Developers - Sahil Malik 5 13 Oslo 24900
JAVA Days Nov Dec Jan Feb Location Price
Core Spring - Mårten Haglind 4 25 Oslo 21900
Effective Scala - Jon-Anders Teigen 3 19 Oslo 18900
Programming Java Standard Edition - Peet Denny 5 Oslo 24900
Java EE Remastered - Mårten Haglind 4 16 Oslo 21900
Understanding Advanced Java & The Java Virtual Machine - Ben Evans 3 19 Oslo 18900
HTML5 - JavaScript - CSS3 Days Nov Dec Jan Feb Location Price
JavaScript and HTML5 for Developers - Christian Wenz 3 06 Oslo 18900
JavaScript for programmers - Christian Johansen 3 11 Oslo 18900
Test-Driven JavaScript with Christian Johansen 3 20 Oslo 18900
AngularJS and HTML5 - Scott Allen 4 10 Oslo 21900
Writing Ambitious Webapps with Ember.js - Joachim Haagen Skeie 3 Oslo 18900
Invitation to
Anchor Webcamp

with Jon Galloway

Introduction to ASP.NET and
Visual Studio 2013 Web Tooling
9. December at IT Fornebu
Free course. Register on www.programutvikling.no
COURSE OVERVIEW LONDON
COURSE LOCATION, LONDON
De Vere's Holborn Bars is one of the UK's leading conference centres in London. Step through the imposing doors of Holborn Bars and sense you're entering somewhere rather special – an inspirational place to hold conferences, meetings, training and assessment centres. Its bustling central London setting makes a lasting impression on guests and delegates while providing comfortable and inspiring surroundings. You will be provided with cool water & mints in all rooms, freshly brewed tea & coffee with pastries & biscuits during your breaks, and delicious lunches with a different menu prepared every day. In the lobby area there are PCs to catch up on some emails during breaks, and free internet access is also available.
Arriving by Train and Tube
The property is located on the north side of Holborn, close to its junction with Gray’s Inn
Road. It benefits from excellent public transport links, with Chancery Lane underground station, Farringdon station and City Thameslink station all in close proximity.
Bus
You can get to Holborn Bars by bus from London Bridge mainline station. The 521 Bus route
stops opposite just outside the Sainsbury’s headquarters and Holborn Bars is a little further
ahead on the other side of the road.

ADDRESS
Holborn Bars. 138–142 Holborn, London
EC1N 2NQ
ENROLMENT OPTIONS
www.developerfocus.com
info@developerfocus.com
Tel.: 0843 523 5765
DeveloperFocus
© Mark S/ Wikipedia
COURSETITLE
AGILE & SCRUM Days Nov Dec Jan Feb Location Price
Agile Estimating and Planning Training - Mike Cohn 1 05 London £700
Certified Scrum Master - CSM - Geoff Watts 2 26 London £1195
Certified Scrum Master - CSM - Mike Cohn 2 02 London £1400
Certified Scrum Product Owner - CSPO - Geoff Watts 2 28 London £1195
Certified Scrum Product Owner - CSPO - Mike Cohn 2 London £1400
Effective User Stories for Agile Requirements Training - Mike Cohn 1 04 London £700
Succeeding with Agile - Mike Cohn 2 London £1400
Working on a Scrum Team with Kenny Rubin 3 03 London £1400
Writing Effective User Stories Training with Kenny Rubin 2 06 London £1195
ARCHITECTURE - DESIGN - SECURITY - MASTERCLASSES Days Nov Dec Jan Feb Location Price
The Architect's Master Class with Juval Lowy 5 London £2500
Project Design Master Class with Juval Löwy 5 24 London £2500
C++ Days Nov Dec Jan Feb Location Price
Advanced C++ Programming - Hubert Matthews 4 London £1795
MICROSOFT .NET Days Nov Dec Jan Feb Location Price
C#.NET: Developing applications in .NET with C# - Gill Cleeren 5 25 London £1995
Creating Windows 8 Metro Apps using C# and XAML - Gill Cleeren 3 10 London £1495
Web Development in .NET - ASP.NET MVC, HTML5, WebAPI and SPA - Scott Allen 5 09 London £2500
ASP.NET Web API & SignalR - Lightweight web-based architectures for you!
With Christian Weyer
2 13 London £1195
MOBILE APPLICATIONS Days Nov Dec Jan Feb Location Price
Fundamentals of Android Programming with Wei-Meng Lee 5 London £1995
Fundamentals of iOS Programming with Wei-Meng Lee 5 18 London £1995
JAVA Days Nov Dec Jan Feb Location Price
Understanding Advanced Java & The Java Virtual Machine (JVM) with Ben Evans 3 27 London £1495
Python for Programmers with Marilyn Davis, Ph.D. 3 19 London £1495
SHAREPOINT Days Nov Dec Jan Feb Location Price
SharePoint 2013 and Office 365: End to End for Technical Audience - Sahil Malik 5 London £1995
HOW TO FIND US PROGRAMUTVIKLING MAIN OFFICE, OSLO
ProgramUtvikling AS is located in the IT-Fornebu
technology park at Snarøya.
ADDRESS
Martin Lingesvei 17-25, 1367 Snarøya.
Our offices and course rooms are situated in the terminal building of the former Oslo airport.
The photo shows the car park, bus stop and course rooms. For details of bus times, go to trafikanten.no
Entrance
Parking
Bus stop
FOLLOW US ON
twitter.com/progutvikling www.programutvikling.no/feed/news.aspx nyheter@programutvikling.no facebook.com/programutvikling
ENROLMENT OPTIONS
www.programutvikling.no - info@programutvikling.no
Tel.: +47 67 10 65 65 - Fax: +47 67 82 72 31
VISION
To provide the world’s best courses
for developers and managers.
BUSINESS CONCEPT
To offer developer courses in the
field of IT that provide practical and
useful expertise.
CLEAR STRATEGY
We have a transparent strategy that
aims to provide courses for develop-
ers and project managers - and that
alone. We do not attempt to sell
products or consultancy services
alongside.
COURSES OFFERED
We hold advanced courses in, for
example, C#, .NET, SQL, UML, XML,
Java, Maven, TDD and Spring. In
project management, we offer the
most popular methodologies within
agile development, including Scrum
and Lean. Our unique combination of
custom courses developed in-house
and standard courses means that we
can help reduce the number of
course days and costs for our
customers.
A unique course provider!
Within the space of just a few years, we have established ourselves
as Norway’s leading provider of courses for IT developers and project
managers. We are purely an independent course provider and we do not
sell other products or services. Our ambition is to ensure that participants
get the maximum out of their course, so they can make substantial
practical use of the training in their daily work.
Our vision is to provide the world’s best courses
for developers and managers
1-day Workshop £600 2-day Workshop £950
All Access Pass (All 5 days) £1800
All Access Pass (All 5 days) with Hotel £2150
Accelerated Agile: from months to minutes
3 Dec
JavaScript: Getting your feet wet
2 Dec
How to approach Refactoring
3 Dec
Continuous Delivery
3 Dec
AngularJS workshop
2 - 3 Dec
ASP.NET Web API & SignalR: Lightweight
web-based architectures for you - 2-3 Dec
SharePoint 2013 App Development
2 Dec
Claims-based Identity & Access Control for .NET
2-3 Dec
+
Secure Coding and Design
(C#/.NET Framework) - 2-3 Dec
SAHIL
MALIK
CHRISTIAN
WEYER
BROCK
ALLEN
JEZ
HUMBLE
JON
MCCOY
SCOTT
ALLEN
DOMINICK
BAIER
VENKAT
SUBRAMANIAM
VENKAT
SUBRAMANIAM
DAN
NORTH
Pre-conference workshops

2-3 December, ExCel Arena
ndc-london.com
Agile Cloud Database Architecture Devops Miscellaneous Microsoft
Mobile Programming Languages Security Tools UX Web Testing
Design Fun
Timeslots Room 1 Room 2 Room 3 Room 4 Room 5 Room 6 Room 7 Workshops
09:00 - 10:00
10:00 - 10:20 Break
10:20 - 11:20 ASP.NET Web
API 2: HTTP
Services for
the Modern
Web and
Mobile
Applications
Daniel Roth

What’s wrong
with my site?
Lean DevOps
for Azure
Michele
Leroux
Bustamante
Why Agile
doesn’t scale
- and what you
can do about it
Dan North
Straight-Up
Design
Jen Myers
Async in C
#
5
Jon Skeet
EventStore -
an introduc-
tion to a DSD
for event
sourcing and
notifications
Liam Westley
Data-Bind
everything!
Stuart Lodge

11:20 - 11:40 Break
11:40 - 12:40 Scripting your
web API
development
using scriptcs
Glenn Block

Simple.Web
101
Mark Rendle

I want to
develop for
SharePoint,
but I don’t
know where
to start
Sahil Malik

Rx and the
TPL: Cats and
Dogs Living
Together??
Paul Betts

Making Magic:
Combining
Data,
Information,
Services and
Programming,
at Internet-
Scale
Don Syme
The
Development
Landscape
Is Moving
Extremely
Fast! Here’s
How To Move
Faster.
Jon Galloway

ECMAScript 6:
what’s next for
JavaScript?
Axel
Rauschmayer

12:40 - 13:40 Break
13:40 - 14:40 The missing
link
– hypermedia
in Web API.
Darrel Miller

Doing data
science with
F
#
Tomas
Petricek
Programming
Clojure
Venkat
Subramaniam
Adopting
Continuous
Delivery
Jez Humble
Introduction
to Windows
Azure Part
Scott Guthrie
CQRS with
Erlang
Bryan Hunter

Callable
entities in
ECMAScript 6
Axel
Rauschmayer

14:40 - 15:00 Break
15:00 - 16:00 API Client
library V2
Darrel Miller
Windows
Azure
Essentials
Michele
Leroux
Bustamante
To Estimate
or Not to
Estimate –
is that the
question?
Marit Rossnes

The Lean
Enterprise
Jez Humble

Introduction
to Windows
Azure Part II
Scott Guthrie
The tale about
tiles
Gill Cleeren

Semantics
matter
Jon Skeet
16:00 - 16:20 Break
16:20 - 17:20 ASP.NET and
OWIN - Better
Together
Daniel Roth
Learning from
Noda Time: a
case study in
API design and
open source
(good, bad
and ugly)
Jon Skeet

Testing of
Mobile Apps
with Calabash
and Xamarin
Test Cloud
Karl Krukow

Asynchrony in
Node.js, let me
count the
Glenn Block

You Thought
You
Understood
Multithreading
Gael Fraiteur
Herding Code
Live
Herding Code
Live
ASP.NET MVC,
you’re doing it
wrong. An
Introduction
to Nancy
Mathew
McLoughlin
Agile Team
Interactions
Workshop
Jessie
Shternshus
17:20 - 17:40 Break
17:40 - 18:40 Hidden
Complexity:
Inside the
Simple code
Mark Rendle
A good
SharePoint
development
machine
Sahil Malik

Jackstones:
the Journey
to Mastery
Dan North

NDC Cage
Match:
Dynamic vs.
Static with
Gary Bernhardt
and Jon Skeet.
Gary
Bernhardt,
Jon Skeet and
Rob Conery
Throw away
those training
wheels,
JavaScript
without the
frameworks
Rob Ashton
Democratizing
event
processing at
all scales and
platforms with
Reactive
Extensions
(Rx)
Matthew
Podwysocki

Advanced
techniques
to implement,
customize
and manage
a Nancy
application
Damian Hickey
Timeslots Room 1 Room 2 Room 3 Room 4 Room 5 Room 6 Room 7 Workshops
09:00 - 10:00 A deep dive into
the ASP.NET
Web API
runtime
architecture
Pedro Félix

Modern
Architecture
patterns for
the cloud
John S Azariah
and Mahesh
Krishnan

Build Real
World Cloud
Apps using
Windows Azure
Part I
Scott Guthrie
Full Text
Searching &
Ranking with
ElasticSearch
and C
#

JP Toto
The Hitchhikers
guide to Object
Orientated
JavaScript
Martin Beeby

Windows Phone
8 Networking:
HTTP,
Bluetooth
and NFC
Andy Wigley

jQuery Mobile
and ASP.NET
MVC
Scott Allen

10:00 - 10:20 Break
10:20 - 11:20 Pragmatic ASP.
NET Web API
Solutions
- beyond
ValuesCon-
troller
Christian Weyer
OWASP Top Ten
2013
Christian Wenz

Build Real
World Cloud
Apps using
Windows
Azure Part II
Scott Guthrie
Poka-yoke
code: APIs
for stupid
programmers
Mark Seemann
How WebGL
lets you write
massively
parallel GPU
code for the
browser
Steve
Sanderson
Building for
Windows 8 and
Windows Phone
8 with .NET and
MVVM
Mike Taulty



Secrets of
Awesome
JavaScript API
Design
Brandon
Satrom


11:20 - 11:40 Break
11:40 - 12:40 Securing ASP.
NET Web API
(v2)
Dominick Baier

Battle of
the Mocking
Frameworks
Dror Helper
AngularJS – an
Introduction for
Windows
developers
Ingo Rammer
The Reasonable
Expectations of
your new CTO
Robert C.
Martin

Automating
Testing in the
big, bad
Enterprise
World
Jeremy D. Miller
Reactive
meta-program-
ming with drones
Jonas Winje,
Einar W. Høst
and Bjørn Einar
Bjartnes
Stop Hiring
Devops Experts
(And Start
Growing Them)
Jez Humble

12:40 - 13:40 Break
13:40 - 14:40 A technical
Introduction to
OAuth2,
OpenID
Connect and
JSON Web
Tokens: The
Security Stack
for modern
Applications
Dominick Baier

Scala for C
#

Developers
James Hughes
Hybrid &
powerful mobile
apps with
PhoneGap &
AngularJS
Christian Weyer

Software
Project Design
Juval Lowy
Teaching Our
CSS To Play
Nice
Jen Myers

ZeroMQ - A
Whole Bunch
of Awesome
[C
#
Edition]
Ashic Mahtab

Creating Killer
Windows Phone
Apps
Andy Wigley

14:40 - 15:00 Break
15:00 - 16:00 AngularJS
DIrectives And
The Computer
Science Of
JavaScript
Burke Holland
Internals of
security in
ASP.NET
Brock Allen


Windows 8.1
Apps with
JavaScript 101
Christian Wenz

A Functional
architecture
with F
#
Mark Seemann

Architecting,
designing and
coding for
change:
Applying the
Life Preserver
to build
adaptable
software
Russ Miles


Functional
Programming:
What? Why?
When?
Robert C.
Martin


C++ style
Hubert
Matthews

16:00 - 16:20 Break
16:20 - 17:20 Actor based
programming
using Orleans
Richard Astbury

Identity
management
in ASP.NET
Brock Allen
AngularJS talk I
Pete Bacon
Darwin
Yes, you can!
Tooling and
Debugging for
JavaScript
Ingo Rammer

How I Learned
to Stop
Worrying and
Love Flexible
Scope
Christian Hassa
and Gojko Adzic
Programming
Diversity
Ashe Dryden
Domain driven
design with the
F
#
type system
Scott Wlaschin


Rapid Problem
Solving
Workshop
Jessie
Shternshus
17:20 - 17:40 Break
17:40 - 18:40 WEB API Panel
Debate
.NET Rocks!

From audience
member to
speaker..
Managing
technical
presentations
Niall Merrigan
Putting the
Microsoft
Design
Language to
work
Laurent
Bugnion
TBA
Don Syme
Defending
.NET/C
#

Applications
- Layered
Security
Jon McCoy
Zen of
Architecture
Juval Lowy
Test Your
Javascript...
with the Help
of D&D
Tim G. Thomas

PROGRAM – Thursday
Agile Cloud Database Architecture Devops Miscellaneous Microsoft
Mobile Programming Languages Security Tools UX Web Testing
Design Fun
PROGRAM – Wednesday
Timeslots Room 1 Room 2 Room 3 Room 4 Room 5 Room 6 Room 7 Workshops
09:00 - 10:00 The C Word--
Continuous
Brian A. Randell


Knocking it out
of the park, with
Knockout.JS
Miguel A.
Castro
Agile software
architecture
sketches
Simon Brown

Securing a
modern
Java-Script
based single
page web
application
Erlend Oftedal

PreCrime -
The Future of
Policing in the
Era of Austerity
Gary Short
TBA
Geoff Watts
Hunting ASP.
NET Perfor-
mance Bugs
Tiberiu Covaci
10:00 - 10:20 Break
10:20 - 11:20 ASP.NET
SignalR 2.0
(and beyond)
Damian
Edwards and
David Fowler
Native Cross
Platform Mobile
Development
with Xamarin 1
Xamarin
BDD for
financial
calculations:
Excel vs.
Given-When-
Then, or both?
Gáspár Nagy
The Future
of C
#

Mads Torgersen
TBA
TBA
Getting Agile
with Scrum
Mike Cohn

File -> new
project to
deploy in 10
minutes with
TeamCity and
Octopus Deploy
Tomas Jansson
11:20 - 11:40 Break
11:40 - 12:40 Doing SPA
with MVC &
Knockout.JS
Miguel A.
Castro
Nuts and bolts
of writing OWIN
middleware
Louis Dejardin
It’s a Kind
of Magic
Andy Clymer
NDC Cage
Match: Testing!
NodeJS vs. C
#

and .NET with
Rob Ashton and
Jeremy Miller
Jeremy D. Miller,
Rob Ashton and
Rob Conery

The Birth
& Death of
JavaScript
Gary Bernhardt

Agile
Estimating
Mike Cohn
Build It So You
Can Ship It!
Brian A. Randell

12:40 - 13:40 Break
13:40 - 14:40 People,
Process, Tools
– The Essence
of DevOps
Richard
Campbell
Authentication
Middleware for
OWIN
Louis Dejardin

Kinect
Carl Franklin
Object Oriented
Design in the
Wild
Jessica Kerr
Making 3D
games with
Monogame
Richard Garside


User Stories
for Agile
Requirements
Mike Cohn
MongoDB
for the C
#

developer
Simon Elliston
Ball
14:40 - 15:00 Break
15:00 - 16:00 Using ASP.NET
SignalR...
in anger
Damian
Edwards and
David Fowler
Native Cross
Platform Mobile
Development
with Xamarin 2
Xamarin
Vagrant, the
ability to think
about
production
deployments
from day 1 of
development
Paul Stack
Where is the
work hiding?
Marcin Floryan
SAY ‘GO
AHEAD’ ONE
MORE TIME!
I DARE YOU
Rob Conery
Advanced
Topics in Agile
Planning
Mike Cohn
TBA
TBA
16:00 - 16:20 Break
16:20 - 17:20 Cleaning up
Code Smell
Venkat
Subramaniam
BDD all the way
down
Enrico
Campidoglio
Exploring the
C
#
scripting
experience with
scriptcs
Justin Rusbatch


Demystifying
the Reactive
Extensions
Matt Ellis
Bitcoin and
The Future of
Cryptocurrency
Ben Hall
Scaling Agile to
Work with a
Distributed
Team
Mike Cohn
TBA
TBA
PROGRAM – Friday
Agile Cloud Database Architecture Devops Miscellaneous Microsoft
Mobile Programming Languages Security Tools UX Web Testing
Design Fun
We believe that interests and passion are important – both at work and outside it. Curious people quite simply achieve more. At BEKK you get the opportunity to develop further, both as a person and as a professional.
Check out the various job opportunities and the full breadth of our expertise on our website.
WORK AT BEKK?
www.bekk.no