```python
import functools as ft
import numpy as np

def BPTree(n, S, u, d):
    """Binomial price tree: level i holds the i+1 prices after i steps."""
    r = [np.array([S])]
    for i in range(n):
        r.append(np.concatenate((r[-1][:1]*u, r[-1]*d)))
    return r

def GBM(R, P, S, T, r, b, v, n):
    """General Binomial Model (CRR)."""
    t = float(T)/n                      # time interval
    u = np.exp(v * np.sqrt(t))          # up-move
    d = 1./u                            # down-move
    p = (np.exp(b * t) - d)/(u - d)     # probability of up-move
    ptree = BPTree(n, S, u, d)[::-1]    # reversed binomial price tree
    R_ = ft.partial(R, np.exp(-r*t), p)
    return ft.reduce(R_, map(P, ptree))[0]

def American(D, p, a, b):
    """Reduction step: max of early exercise and discounted expectation."""
    return np.maximum(b, D*(a[:-1]*p + a[1:]*(1-p)))

def VP(S, K):
    """Vanilla put payoff: max(K-S, 0)."""
    return np.maximum(K - S, 0)

ABM = ft.partial(GBM, American)
```

There is a minor deviation from the q code: we allow d to be specified in BPTree. But otherwise, they do the same thing. Performance (as measured in IPython) isn't too far off either:

```
In [1]: from binomial import *

In [2]: %timeit ABM(ft.partial(VP,K=102.0), 100.0, 1.0, 0.08, 0.08, 0.2, 1000)
10 loops, best of 3: 38.4 ms per loop

In [3]: ABM(ft.partial(VP,K=102.0), 100.0, 1.0, 0.08, 0.08, 0.2, 1000)
Out[3]: 6.2215001602514555
```

Note the similarity between the q and Python code. The similarity is a result of using NumPy and functools, which enable Python to perform array-oriented computation and partial function application. We did use a loop in BPTree, as Python/NumPy does not have a built-in "scan" operation like q's. I suppose we could have created a numpy.ufunc to use accumulate()… but the loop felt cleaner and more Pythonic.
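For what it's worth, NumPy does expose a scan-like primitive for its built-in ufuncs, and `np.frompyfunc` can wrap an arbitrary Python function into a ufunc with `accumulate()` (it operates on object arrays, so it is correct but slow). A minimal sketch:

```python
import numpy as np

# Built-in ufuncs already support a scan via accumulate():
print(np.multiply.accumulate(np.array([1., 2., 2., 2.])))
# -> [1. 2. 4. 8.]

# An arbitrary Python function can be wrapped into a ufunc; its
# accumulate() then works on object arrays (correct but slow).
f = np.frompyfunc(lambda acc, x: acc + x, 2, 1)
print(f.accumulate(np.array([1, 2, 3], dtype=object)))
# -> [1 3 6]
```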

---

Constructing a binomial price tree is relatively easy in q:

```
BPTree:{[n;S;u] n{(x*y 0),y%x}[u]\1#S}    / binomial price tree
```

where n is the depth of the tree, S is the current price and u is the scale of the up-move. We simply let the scale of the down-move be 1/u, hence the y%x in the expression.
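As a cross-check, here is a sketch of the same construction in plain Python (the function name is mine; the down-move is fixed at 1/u, as in the q code):

```python
# Each new level prepends an up-move of the previous top price and
# divides the whole previous level by u (the down-move is 1/u).
def bptree(n, S, u):
    levels = [[S]]
    for _ in range(n):
        prev = levels[-1]
        levels.append([u * prev[0]] + [x / u for x in prev])
    return levels

print(bptree(2, 100.0, 2.0))
# -> [[100.0], [200.0, 50.0], [400.0, 100.0, 25.0]]
```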

The general binomial model can then be implemented as follows:

```
GBM:{[R;P;S;T;r;b;v;n]            / General Binomial Model (CRR)
  t:T%n;                          / time interval
  u:exp v*sqrt t;                 / up; down is 1/u
  p:(exp[b*t]-1%u)%(u-1%u);       / probability of up
  ptree:reverse BPTree[n;S;u];    / reverse binomial price tree
  first R[exp[neg r*t];p] over P ptree }
```

where R is a reduction function, P is the payoff function, S is the current price, T is the time to maturity, r is the risk-free rate, b is the cost of carry, v is the volatility and n is the depth of the tree. For American and European options, the reduction functions may be expressed as:

```
American:{[D;p;a;b] max(b;D*(-1_a*p)+1_a*1-p)}
European:{[D;p;a;b] D*(-1_a*p)+1_a*1-p}
```

where D is the discount factor and p is the probability of an up-move. Consequently, we can express the American and European binomial models simply as:

```
ABM:GBM[American]
EBM:GBM[European]
```
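To make one reduction step concrete, here is a small numeric sketch in Python (the function name and numbers are mine, mirroring the American definition above):

```python
import numpy as np

# One backward-induction step of the American reduction: the maximum of
# the immediate payoff b and the discounted expected continuation value.
def american_step(D, p, a, b):
    return np.maximum(b, D*(a[:-1]*p + a[1:]*(1-p)))

# Toy numbers: continuation values 4.0 (up-child) and 1.0 (down-child),
# immediate-exercise payoff 3.0, no discounting, symmetric probabilities.
a = np.array([4.0, 1.0])
print(american_step(1.0, 0.5, a, np.array([3.0])))
# -> [3.]  (early exercise wins: 3.0 > 0.5*4.0 + 0.5*1.0 = 2.5)
```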

Testing the code on an American vanilla put option (strike = 102; price = 100; time to maturity = 1 year; risk-free rate = cost of carry = 8%; volatility = 20%; depth of tree = 1000):

```
q)VP:{[S;K]max(K-S;0)}    / vanilla put: max(K-S,0)
q)\t show ABM[VP[;102];100;1;0.08;0.08;0.2;1000]
6.2215
31
```

It took 31 ms to compute. Pretty nice for so little code.

Note: It turns out that this implementation of EBM is faster than the one in the previous post. The reason is that I avoided using the expensive xexp function this time round. Otherwise, the previous implementation should be faster since it only computes the payoffs at maturity and not the intermediate nodes.

---

```
EBM:{[P;S;K;T;r;b;v;n]        / European Binomial Model (CRR)
  t:T%n;                      / time interval
  u:exp v*sqrt t;             / up
  d:1%u;                      / down
  p:(exp[b*t]-d)%(u-d);       / probability of up
  ns:til n+1;                 / 0, 1, 2, ..., n
  us:u xexp ns;               / u**0, u**1, ...
  ds:d xexp ns;               / d**0, d**1, ...
  Ss:S*ds*reverse us;         / prices at tree leaves
  ps:pmf[n;p];                / probabilities at tree leaves
  exp[neg r*T]*sum P[Ss;K]*ps }
```

Note that P is the payoff, S is the current price, K is the strike price, T is the time to maturity, r is the risk-free rate, v is the volatility, b is the cost of carry and n is the depth of the binomial tree. The Python version using NumPy and SciPy actually looks quite similar:

```python
def EuropeanBinomialModel(P, S, K, T, r, b, v, n):
    n = int(n)
    t = float(T)/n                 # time interval
    u = np.exp(v * np.sqrt(t))     # up
    d = 1/u                        # down
    p = (np.exp(b*t)-d)/(u-d)      # probability of up
    ns = np.arange(0, n+1, 1)      # 0, 1, 2, ..., n
    us = u**ns                     # u**0, u**1, ...
    ds = d**ns                     # d**0, d**1, ...
    Ss = S*us*ds[::-1]             # prices at leaves
    ps = binom_pmf(ns, n, p)       # probabilities at leaves
    return np.exp(-r*T) * np.sum(P(Ss,K) * ps)
```

As we can see, neither version has any explicit loops. This is possible in Python because NumPy and SciPy are array-oriented. NumPy and SciPy's idea of "broadcasting" has some similarity with k/q's concept of "atomic functions" (definition: *a function f of any number of arguments is atomic if f is identical to f'*).
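A minimal illustration of broadcasting, using the vanilla-put payoff from above (the numbers are mine):

```python
import numpy as np

# The scalar K broadcasts across the array S, much like an atomic
# function in k/q penetrates into its list arguments.
S = np.array([80.0, 100.0, 120.0])
K = 102.0
payoff = np.maximum(K - S, 0)
print(payoff)   # -> [22.  2.  0.]
```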

---

`scipy.stats.binom.pmf(x,n,p)`

I thought it would be great if I could have such a function in q. So a simple idea is to construct a binomial tree with the probabilities attached. Recalling that Pascal's triangle is generated using `n{0+':x,0}\1`, I modified it to get:

```
q)pmf:{[n;p]n{(0,y*1-x)+x*y,0}[p]/1#1f}
q)pmf[6;0.3]
0.000729 0.010206 0.059535 0.18522 0.324135 0.302526 0.117649
q)sum pmf[1000;0.3]
1f
```
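The same recurrence is easy to transliterate into Python (this sketch is mine, not from the post):

```python
# Binomial pmf via the Pascal-style recurrence: start from [1.0] and
# apply the step n times. Element k ends up holding
# C(n, n-k) * p**(n-k) * (1-p)**k, matching the q ordering.
def pmf(n, p):
    row = [1.0]
    for _ in range(n):
        row = [(1 - p)*a + p*b for a, b in zip([0.0] + row, row + [0.0])]
    return row

print(pmf(6, 0.3)[0])    # ~ 0.000729 (= 0.3**6), as in the q session
```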

What is great about this method is that it is numerically stable. Compared to SciPy 0.7.0, it was more accurate too (it is a known issue that older versions of SciPy have a buggy binom.pmf):

```
>>> scipy.stats.binom.pmf(range(0,41),40,0.3)[-5:]
array([3.33066907e-15, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
       1.11022302e-16])

q)-5#pmf[40;0.7]
3.293487e-15 1.52594e-16 5.162955e-18 1.134715e-19 1.215767e-21
```

Unfortunately, this method is too slow for large n, where more sophisticated methods are needed. For the interested reader, take a look at Catherine Loader's paper *Fast and Accurate Computation of Binomial Probabilities* and the implementation of the binomial distribution in Boost.

---

Wouldn't it be great if there were a SerializableProperty class that handled this automatically for us? It doesn't exist, but according to this article, it is easy to create our own customized Property classes. So here is a simple implementation of SerializableProperty that worked for me:

```python
import cPickle as pickle
import zlib

from google.appengine.ext import db

class SerializableProperty(db.Property):
    """
    A SerializableProperty will be pickled and compressed before it is
    saved as a Blob in the datastore. On fetch, it is decompressed and
    unpickled. This allows us to save serializable objects (e.g. dicts)
    in the datastore. The sequence of transformations applied can be
    customized by calling the set_transforms() method.
    """
    data_type = db.Blob
    _tfm = [lambda x: pickle.dumps(x, 2), zlib.compress]
    _itfm = [zlib.decompress, pickle.loads]

    def set_transforms(self, tfm, itfm):
        self._tfm = tfm
        self._itfm = itfm

    def get_value_for_datastore(self, model_instance):
        value = super(SerializableProperty, self).get_value_for_datastore(model_instance)
        if value is not None:
            value = self.data_type(reduce(lambda x, f: f(x), self._tfm, value))
        return value

    def make_value_from_datastore(self, value):
        if value is not None:
            value = reduce(lambda x, f: f(x), self._itfm, value)
        return value
```
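The heart of the class is the reduce over a list of transforms. Outside App Engine, the round-trip can be sketched in a few lines (Python 3; the helper name is mine):

```python
import pickle
import zlib
from functools import reduce

# Run a value through a list of transforms on the way into the
# datastore, and through the inverse list on the way back out.
tfm = [lambda x: pickle.dumps(x, 2), zlib.compress]
itfm = [zlib.decompress, pickle.loads]

def apply_all(fs, value):
    return reduce(lambda x, f: f(x), fs, value)

blob = apply_all(tfm, {"key": "value"})   # bytes, ready for a Blob field
assert apply_all(itfm, blob) == {"key": "value"}
```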

Usage is as simple as this:

```python
class MyModel(db.Model):
    data = SerializableProperty()

entity = MyModel(data={"key": "value"}, key_name="somekey")
entity.put()

entity = MyModel.get_by_key_name("somekey")
print entity.data
```

Hope that helps!

**Update** (20091126): I've changed db.Blob to self.data_type as suggested by Peritus in a comment. The same comment also suggested a JSONSerializableProperty subclass:

```python
import simplejson as json

class JSONSerializableProperty(SerializableProperty):
    data_type = db.Text
    _tfm = [json.dumps]
    _itfm = [json.loads]
```

Thanks Peritus!

---

```
q)gc:{$[x;(0b,/:a),1b,/:reverse a:.z.s x-1;1#()]}
q)show gc 4
0000b
0001b
0011b
0010b
0110b
0111b
0101b
0100b
1100b
1101b
1111b
1110b
1010b
1011b
1001b
1000b
```

It is also possible to construct the above iteratively using the formula g(n) = n XOR (n div 2):

```
q).q.xor:{not x=y}
q)gc_iter:{(0b vs x) xor (0b vs x div 2)}
q)show (-4#gc_iter@) each til 16
0000b
0001b
0011b
0010b
0110b
0111b
0101b
0100b
1100b
1101b
1111b
1110b
1010b
1011b
1001b
1000b
```
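The same formula is a one-liner in Python (a sketch of mine; the one-bit property is verified below in q as well):

```python
# Gray code via g(n) = n XOR (n >> 1): successive codes differ
# in exactly one bit.
def gc_iter(n):
    return n ^ (n >> 1)

codes = [gc_iter(i) for i in range(16)]
print(codes[:8])    # -> [0, 1, 3, 2, 6, 7, 5, 4]
```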

To check that indeed exactly one bit is flipped each time:

```
q)check:{x[0] (sum@xor)': 1_x}
q)check gc 5
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
```

To identify the position of the bit that was flipped:

```
q)pos:{raze x[0] (where@xor)': 1_x}
q)pos gc 5
4 3 4 2 4 3 4 1 4 3 4 2 4 3 4 0 4 3 4 2 4 3 4 1 4 3 4 2 4 3 4
```

If we think about it, there is no reason why we have to prefix the new bit; we could suffix it as well:

```
q)gc:{$[x;(a,\:0b),(reverse a:.z.s x-1),\:1b;1#()]}
q)show gc 4
0000b
1000b
1100b
0100b
0110b
1110b
1010b
0010b
0011b
1011b
1111b
0111b
0101b
1101b
1001b
0001b
q)check gc 5
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
q)pos gc 5
0 1 0 2 0 1 0 3 0 1 0 2 0 1 0 4 0 1 0 2 0 1 0 3 0 1 0 2 0 1 0
```

In fact, if we are only interested in the positions that need to be flipped, we can use this instead:

```
q)gcpos:{$[x;a,n,a:.z.s n:x-1;()]}
q)gcpos 5
0 1 0 2 0 1 0 3 0 1 0 2 0 1 0 4 0 1 0 2 0 1 0 3 0 1 0 2 0 1 0
```

Such a sequence of positions is useful if we are using Gray code to efficiently enumerate the non-zero points spanned by a set of basis vectors:

```
q)basis:(1100000b;0111001b;0000011b)
q){x xor y} scan basis gcpos count basis
1100000b
1011001b
0111001b
0111010b
1011010b
1100011b
0000011b
```
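For illustration, here is a Python sketch of the position-only recursion and the basis enumeration (names are mine):

```python
# gcpos(n): which element to toggle at each Gray-code step,
# mirroring the q recursion gcpos:{$[x;a,n,a:.z.s n:x-1;()]}.
def gcpos(n):
    if n == 0:
        return []
    a = gcpos(n - 1)
    return a + [n - 1] + a

# Enumerate all non-zero XOR combinations of the basis vectors
# (encoded as ints), changing one basis element per step.
basis = [0b1100000, 0b0111001, 0b0000011]
acc, out = 0, []
for i in gcpos(len(basis)):
    acc ^= basis[i]
    out.append(acc)

print([format(v, "07b") for v in out])   # matches the q output above
```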

**Update** (20090927): Once again, Attila has beaten me at q-golf :-) Here is his formulation:

```
gc:{x{(0b,/:x),1b,/:reverse x}/1#()}
```

---

According to this thread, it happens on Ubuntu 9.10 too and the solution is to compile with -O1 instead of -O3 optimization. Unfortunately it wasn’t obvious (to me, at least) how to make GCC use -O1 specifically for PARI only.

Digging around in SAGE’s build system, I figured it could be done by repacking the PARI spkg with a modified “get_cc” script:

```
cd sage-4.1.1/spkg/standard
tar jxf pari-2.3.3.p1.spkg
sed 's/OPTFLAGS=-O3/OPTFLAGS=-O1/g' \
    pari-2.3.3.p1/src/config/get_cc > get_cc
mv get_cc pari-2.3.3.p1/src/config/get_cc
mv pari-2.3.3.p1.spkg pari-2.3.3.p1.spkg.orig
tar jcf pari-2.3.3.p1.spkg pari-2.3.3.p1
```

After that, I was able to compile SAGE using its standard build procedure. Admittedly this is a quick hack. A better solution may be to set OPTFLAGS according to the version of GCC used.

**Update**: According to this thread, it is fixed in Ubuntu Karmic.

---

```
[sh@pc ~]$ grep -slrP '\x05\x00\xc0' /boot
/boot/grub/ffs_stage1_5
/boot/grub/ufs2_stage1_5
/boot/grub/stage2
/boot/efi/EFI/redhat/grub.efi
/boot/vmlinuz-2.6.29.6-213.fc11.x86_64
```

I couldn’t find this when Googling for “grep binary” so I thought I should pen it down here.
