In computer science, imperative programming is a programming paradigm that describes computation in terms of statements that change program state.
FUNctional Programming
In computer science, functional programming is a programming paradigm—a style of building the structure and elements of computer programs—that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data.
Clarifications
The important part is the programming abstraction; of course, every runtime computation uses mutability somewhere
FP is a style of programming; it can be done in any language
Often easier or more efficient to use a language designed for FP
… but JavaScript probably isn't doing you any favors either ;)
Popularity Cons
Fewer learning resources available
Unlikely to find a fancy IDE (unless Emacs counts)
Fewer programmers available, may be harder to hire … but you may find better ones (Blub Paradox)
Fewer open source libraries available … but maybe you waste less time on bad ones ;)
Technical Cons
Garbage Collection is basically a hard requirement … but unless you're still using C++, this is status quo
Pure algorithms may have an extra n or log n time cost … but there's usually a way to cheat
Shared state (e.g. configuration) can be cumbersome … but globals are bad practice anyway
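The "cheat" mentioned above is often local mutation behind a pure interface: callers can never observe the mutable state, so the function remains referentially transparent. A minimal Python sketch (the `frequencies` function is illustrative, not from any library):

```python
def frequencies(xs):
    """Return an element -> count mapping without mutating the input."""
    counts = {}                      # local, mutable "cheat": O(n) overall,
    for x in xs:                     # versus O(n log n) with a persistent
        counts[x] = counts.get(x, 0) + 1  # tree-based map
    return counts                    # same input always yields the same output
```

The mutation is confined to `counts`, which is created and returned inside the function, so from the outside `frequencies` behaves exactly like a pure function.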
Pre-History
1930s-1950s
Lambda Calculus (Church)
Combinatory Calculus (Curry & Feys)
LISP (McCarthy)
1960s-1970s
Operational (Landin) and Denotational (Strachey) semantics
ML (Milner)
Lazy FP & Graph Reduction (Turner)
1980s
Miranda (Turner), Lazy ML (Augustsson & Johnsson)
Abbreviated History
1987
Erlang experiments at Ericsson's R&D lab
Haskell committee formed
1990
Haskell 1.0
1998
Erlang open sourced
2004
Scala public release
2007
Clojure public release
Why so long?
Compiling FP style code to efficient machine code is a harder problem than adding layers of abstraction to how the machine already works.
FP languages haven't had as many corporations pushing their adoption.
Why now?
The imperative languages we have are a mess.
Particularly with regard to concurrency and parallelism.
Even embedded devices are multi-core today.
FP can make multi-core and distributed systems easier to build.
Pros
Code is more declarative; what to do, not how to do it.
Erlang and Haskell have cheap concurrency; no more callback spaghetti.
Erlang was designed for uptime. Introspection, resiliency and upgrade paths are built-in.
Declarative
Describe the problem, not the solution.
# merge_sort.py
def merge_sort(lst):
    if not lst:
        return []
    lists = [[x] for x in lst]
    while len(lists) > 1:
        lists = merge_lists(lists)
    return lists[0]

def merge_lists(lists):
    result = []
    for i in range(0, len(lists) // 2):
        result.append(merge2(lists[i*2], lists[i*2 + 1]))
    if len(lists) % 2:
        result.append(lists[-1])
    return result

def merge2(xs, ys):
    i, j = 0, 0
    result = []
    while i < len(xs) and j < len(ys):
        x, y = xs[i], ys[j]
        if x > y:
            result.append(y)
            j += 1
        else:
            result.append(x)
            i += 1
    result.extend(xs[i:])
    result.extend(ys[j:])
    return result
module MergeSort (mergeSort) where

-- | Bottom-up merge sort.
mergeSort :: Ord a => [a] -> [a]
mergeSort = mergeAll . map (:[])
  where
    mergeAll []   = []
    mergeAll [xs] = xs
    mergeAll xss  = mergeAll (mergePairs xss)

    mergePairs (a:b:xs) = merge a b : mergePairs xs
    mergePairs xs       = xs

    merge as@(a:as') bs@(b:bs')
      | a > b     = b : merge as bs'
      | otherwise = a : merge as' bs
    merge [] bs = bs
    merge as [] = as
Erlang has convenient bit syntax for parsing binary data
%% This parses a TCP packet header (IPv4)!
<<SourcePort:16, DestinationPort:16,
  SequenceNumber:32, AckNumber:32,
  DataOffset:4, _Reserved:4, Flags:8, WindowSize:16,
  Checksum:16, UrgentPointer:16,
  Payload/binary>> = Segment,
OptSize = (DataOffset - 5) * 32,
<<Options:OptSize, Message/binary>> = Payload,
<<CWR:1, ECE:1, URG:1, ACK:1, PSH:1, RST:1, SYN:1, FIN:1>> = <<Flags:8>>,
%% Can now process the Message according to the
%% Options (if any) and the flags CWR, …, FIN etc.
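For comparison, here is a rough Python sketch of the same header parse using the stdlib struct module. `parse_tcp_header` is a hypothetical helper mirroring the Erlang example; note it computes the option size in bytes, where the Erlang version counts bits:

```python
import struct

def parse_tcp_header(segment):
    """Hypothetical helper mirroring the Erlang bit-syntax example."""
    (src_port, dst_port, seq, ack,
     off_reserved, flags, window, checksum, urgent) = struct.unpack(
        "!HHIIBBHHH", segment[:20])
    data_offset = off_reserved >> 4      # high 4 bits; low 4 are reserved
    opt_size = (data_offset - 5) * 4     # option *bytes* past the 20-byte header
    options = segment[20:20 + opt_size]
    message = segment[20 + opt_size:]
    syn = (flags >> 1) & 1
    ack_flag = (flags >> 4) & 1
    fin = flags & 1
    return src_port, dst_port, seq, ack, syn, ack_flag, fin, options, message
```

The bit-level field extraction that Erlang expresses directly in a pattern match has to be done by hand here with shifts and masks, which is exactly the convenience the bit syntax buys you.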
Cheap Concurrency
Immutable data needs no locks, and with no locks there are no deadlocks.
<3 KB minimum per thread (process in Erlang terminology)
High performance IO multiplexing built-in
Can have millions of threads, even more than one per socket
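The share-nothing, mailbox-per-process model can be sketched in Python, with threads and queues standing in for Erlang processes and mailboxes (`counter` and the message names are illustrative, not an Erlang API):

```python
import threading, queue

def counter(mailbox, replies):
    """One 'process': it alone owns `count`; others talk to it via messages."""
    count = 0
    while True:
        msg = mailbox.get()          # blocking receive from the mailbox
        if msg == "incr":
            count += 1
        elif msg == "get":
            replies.put(count)
        elif msg == "stop":
            break

mailbox, replies = queue.Queue(), queue.Queue()
threading.Thread(target=counter, args=(mailbox, replies)).start()
for _ in range(3):
    mailbox.put("incr")
mailbox.put("get")
result = replies.get()               # 3; no locks needed around `count`
mailbox.put("stop")
```

Because only the owning thread ever touches `count`, there is nothing to lock; Erlang makes this the default for every process, at a tiny fraction of the per-thread cost shown below.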
RAM footprint per unit of concurrency (approx)
1.3 KB    Haskell ThreadId + MVar (GHC 7.6.3, 64-bit)
2.6 KB    Erlang process (64-bit)
8.0 KB    Go goroutine
64.0 KB   Java thread stack (minimum)
64.0 KB   C pthread stack (minimum)
1 MB      Java thread stack (default)
8 MB      C pthread stack (default, 64-bit Mac OS X)
Erlang is Multi-core
One scheduler per core, scales well to 32+ cores
Better scalability to more cores is in progress
Schedulers understand IO (disk, network calls)
No implicit synchronization
%% Parse HTTP headers from Socket
headers(Socket, Request, Headers) ->
    ok = inet:setopts(Socket, [{active, once}]),
    receive
        {http, _, http_eoh} ->
            %% All of the HTTP headers are parsed
            handle_request(Socket, Request, Headers);
        {http, _, {http_header, _, Name, _, Value}} ->
            headers(Socket, Request, [{Name, Value} | Headers]);
        {tcp_closed, _} ->
            exit(normal);
        _Other ->
            %% Invalid request
            exit(normal)
    after ?HEADERS_RECV_TIMEOUT ->
        exit(normal)
    end.
Per-process heaps
No sharing
GC is per-process, and not "stop the world"!
Process references do not prevent GC
Explicitly hibernate idle processes for compaction
No More Async Callbacks
The only reason to use async style is that threads are expensive
With cheap pre-emptive threads, you can write straightforward and performant code without inverting the control flow
Erlang exceptions propagate along linked processes