https://dx.doi.org/10.2139/ssrn.3166840

This paper studies the Talmudic rule, also known as the 1/N rule or the uniform investment strategy, on the microscopic scale, that is, on the scale of single transactions. We focus on the simplest case of only two assets and show that the Talmudic rule causes each transaction to increase the geometric mean of the assets, regardless of the direction of the price change. Then, we answer the following question: given any sequence of prices, how can one find its optimal subsequence, maximizing the total growth of the geometric mean of assets? We conclude with an algorithm that can be used to analyze various sequences of prices and help develop trading strategies based on the Talmudic rule.
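For the two-asset case, the microscopic claim can be sketched in one step. This is a sketch under the paper's setup, with notation chosen here: x and y are the holdings, p the price of the first asset in units of the second, and c > 0 the price change factor.

```latex
% Before a transaction, the rule keeps the two positions of equal value,
% x p = y. After the price moves from p to c p, the portfolio is worth
% x c p + y, and rebalancing splits it evenly again:
x' = \frac{xcp + y}{2cp} = \frac{x(1+c)}{2c},
\qquad
y' = \frac{xcp + y}{2} = \frac{xp(1+c)}{2},
% so the geometric mean of the holdings changes by the factor
\frac{\sqrt{x'y'}}{\sqrt{xy}} = \frac{1+c}{2\sqrt{c}} \ge 1,
% with equality only when c = 1, by the AM-GM inequality.
```

So any price move c ≠ 1, up or down, strictly increases the geometric mean, which is the per-transaction claim.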
Early this year, I made a post on LtU about the experimental "abstract" algorithm in MLC. Soon after that, Gabriel Scherer suggested doing an exhaustive search through all possible inputs up to a particular size. Recently, I decided to conduct such an experiment. Here are

Some results

I managed to collect some results [1]. First of all, I had to pick a particular definition for "size" of a λ-term, because there are many. I chose the one that is used in A220894 [2]:

size(x) = 0;
size(λx.M) = 1 + size(M);
size(M N) = 1 + size(M) + size(N).

For sizes from 1 to 9 inclusive, there exist 5663121 closed λ-terms. I tested all of them against both the "abstract" [3] and the "optimal" [4] algorithms in MLC, with up to 250 interactions per term. The process took almost a day of CPU time. Then, I compared the outputs automatically [5] using a simple awk(1) script (also available in [1]), looking for terms for which the normal form or the number of β-reductions under "abstract" would deviate from "optimal".

No such terms have been found this way. Surprisingly, apparent Lambdascope counterexamples have been identified instead, the shortest of which is λx.(λy.y y) (λy.x (λz.y)), resulting in a fan that reaches the interaction net interface. I plan to look into this in the near future.

As for sizes higher than 9, testing quickly becomes infeasible. For example, there are 69445532 closed terms of sizes from 1 to 10 inclusive, and it takes a lot of time and space just to generate and save them. [6] is a 200 MB gzip(1)'ed tarball (4 GB unpacked) with all these terms split into 52 files of 1335491 terms each. In my current setting, testing them is infeasible.

I may come up with optimizations at some point to make it possible to process terms of sizes up to 10, but 11 and higher look completely hopeless to me.

[1] https://gist.github.com/codedot/3b99edd504678e160999f12cf30da420
[2] http://oeis.org/A220894
[3] https://drive.google.com/open?id=1O2aTULUXuLIl3LArehMtwmoQiIGB62-A
[4] https://drive.google.com/open?id=16W_HSmwlRB6EAW5XxwVb4MqvkEZPf9HN
[5] https://drive.google.com/open?id=1ldxxnbzdxZDk5-9VMDzLvS7BouxwbCfH
[6] https://drive.google.com/open?id=1XjEa-N40wSqmSWnesahnxz6SXVUzzBig
From command line to MLC:

$ npm i -g @alexo/lambda
└── @alexo/lambda@0.3.6

$ node work2mlc.js getwork.json 381353fa | tee test.mlc
Mid = x: x
        hex(24e39e50)
        hex(1efebbc8)
        hex(fb545b91)
        hex(db1ff3ca)
        hex(a66f356d)
        hex(7482c0f3)
        hex(acc0caa8)
        hex(00f10dad);

Data = x: x
        hex(a7f5f990)
        hex(fd270c51)
        hex(378a0e1c);

Nonce = hex(381353fa);

Zero32 (Pop 8 (RunHash Mid Data Nonce))
$ lambda -pem lib.mlc -f test.mlc
3335648(653961), 17837 ms
v1, v2: v1
$ 

https://gist.github.com/codedot/721469173df8dd197ba5bddbe022c487

$ npm i -g @alexo/lambda
└── @alexo/lambda@0.3.6

$ make
	shasum -a 256 /dev/null
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855  /dev/null
	lambda -pem lib.mlc 'Pri32 hex(e3b0c442)'
857(230), 19 ms
_1 _1 _1 _0 _0 _0 _1 _1 _1 _0 _1 _1 _0 _0 _0 _0 _1 _1 _0 _0 _0 _1 _0 _0 _0 _1 _0 _0 _0 _0 _1 _0
	lambda -pem lib.mlc 'Pri32 (Shift 8 (Hash1 NullMsg))'
3247721(688463), 17211 ms
_1 _1 _1 _0 _0 _0 _1 _1 _1 _0 _1 _1 _0 _0 _0 _0 _1 _1 _0 _0 _0 _1 _0 _0 _0 _1 _0 _0 _0 _0 _1 _0
	shasum -a 256 </dev/null | xxd -r -p | shasum -a 256
5df6e0e2761359d30a8275058e299fcc0381534545f55cf43e41983f5d4c9456  -
	lambda -pem lib.mlc 'Pri32 hex(5df6e0e2)'
856(230), 15 ms
_0 _1 _0 _1 _1 _1 _0 _1 _1 _1 _1 _1 _0 _1 _1 _0 _1 _1 _1 _0 _0 _0 _0 _0 _1 _1 _1 _0 _0 _0 _1 _0
	lambda -pem lib.mlc 'Pri32 (Shift 8 (Hash2 NullMsg))'
6448027(1373506), 38750 ms
_0 _1 _0 _1 _1 _1 _0 _1 _1 _1 _1 _1 _0 _1 _1 _0 _1 _1 _1 _0 _0 _0 _0 _0 _1 _1 _1 _0 _0 _0 _1 _0
$ 

Command line

  • POSIX (XCU "Shell & Utilities"): vi(1), awk(1), make(1), bc(1), sed(1), grep(1), sort(1), uniq(1), tee(1), wc(1), etc.
  • GNU Screen (useful to echo exec screen -xR >>~/.profile on a remote host)
  • Git: git-grep(1), git-stash(1), git-bisect(1), etc.
  • Ledger (useful for optimizing both finances and time)
  • Taskwarrior (TODO manager, highly recommended)
  • drive (one of the CLIs for Google Drive)
  • Jekyll (generates static websites from markdown)

Web

Chrome OS

  • Google Keep (quite convenient for grocery lists)
  • Google Drive (directly accessible in Chrome OS' Files)
  • Secure Shell (the main SSH client for Chrome OS, supports SFTP in Files and SSH bookmarks, type ssh name@example.com in the address field)
  • Wolfram Alpha (type = universe age in planck times in the address field)

Disclaimer: I'm celebrating five years as a Chromebook user.

Here is one way to profile calendars:

  1. Export calendars in iCalendar format.
  2. Check out this Awk script:

    function parse(dt)
    {
    	Y = substr(dt, 1, 4);
    	M = substr(dt, 5, 2);
    	D = substr(dt, 7, 2);
    	h = substr(dt, 10, 2);
    	m = substr(dt, 12, 2);
    	s = substr(dt, 14, 2);
    
    	return Y "/" M "/" D " " h ":" m ":" s;
    }
    
    /^BEGIN:VEVENT/ {
    	dtstart = "";
    	dtend = "";
    	summary = "";
    }
    
    /^DTSTART:/ {
    	sub(/\r$/, "");
    	sub(/^DTSTART:/, "");
    	dtstart = parse($0);
    }
    
    /^DTEND:/ {
    	sub(/\r$/, "");
    	sub(/^DTEND:/, "");
    	dtend = parse($0);
    }
    
    /^SUMMARY:/ {
    	sub(/\r$/, "");
    	sub(/^SUMMARY:/, "");
    	gsub(/  */, " ");
    	summary = $0;
    }
    
    /^END:VEVENT/ {
    	if (dtstart && dtend && summary) {
    		print "i " dtstart " " prefix summary;
    		print "o " dtend;
    	}
    }
    

  3. Have the Ledger utility installed:
    sudo apt install ledger # or whatever
  4. Convert the exported ICS files to timelog format:
    awk -f ics2tc.awk *.ics >timelog.tc
  5. Generate various reports from timelog, for example:
    ledger -f timelog.tc b -S -T
  6. Optionally specify a prefix:
    awk -f ics2tc.awk -v prefix=Work: Work.ics >Work.tc
  7. Or even create a Makefile like this:

    TIMELOGS = Anna.tc David.tc
    
    all: $(TIMELOGS)
    
    clean:
    	-rm -f $(TIMELOGS)
    
    .SUFFIXES: .ics .tc
    
    .ics.tc:
    	awk -f ics2tc.awk -v prefix=$*: $< >$@
    

  8. ?????
  9. PROFIT!!1oneone
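To see the pipeline work end to end, here is a toy run: the awk script from step 2 is saved under the name used in step 4, and the single-event calendar is made up for the demo.

```shell
# The script from step 2, saved under the name used in step 4:
cat >ics2tc.awk <<'AWK'
function parse(dt)
{
	Y = substr(dt, 1, 4);
	M = substr(dt, 5, 2);
	D = substr(dt, 7, 2);
	h = substr(dt, 10, 2);
	m = substr(dt, 12, 2);
	s = substr(dt, 14, 2);

	return Y "/" M "/" D " " h ":" m ":" s;
}

/^BEGIN:VEVENT/ {
	dtstart = "";
	dtend = "";
	summary = "";
}

/^DTSTART:/ {
	sub(/\r$/, "");
	sub(/^DTSTART:/, "");
	dtstart = parse($0);
}

/^DTEND:/ {
	sub(/\r$/, "");
	sub(/^DTEND:/, "");
	dtend = parse($0);
}

/^SUMMARY:/ {
	sub(/\r$/, "");
	sub(/^SUMMARY:/, "");
	gsub(/  */, " ");
	summary = $0;
}

/^END:VEVENT/ {
	if (dtstart && dtend && summary) {
		print "i " dtstart " " prefix summary;
		print "o " dtend;
	}
}
AWK

# A made-up single-event calendar:
cat >toy.ics <<'ICS'
BEGIN:VEVENT
DTSTART:20250101T090000Z
DTEND:20250101T100000Z
SUMMARY:Standup
END:VEVENT
ICS

awk -f ics2tc.awk -v prefix=Work: toy.ics
# i 2025/01/01 09:00:00 Work:Standup
# o 2025/01/01 10:00:00
```

The two output lines are exactly the timelog check-in/check-out pairs that Ledger understands.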
I am currently working on implementing needed reduction for interaction nets. To do that, I first needed to refactor a lot of somewhat ugly, hastily written code in inet-lib. At some point, I changed retrieving an element from an array from .shift() to .pop(), just because in JavaScript .pop() happens to be a cheaper operation than .shift().

Many commits later, I decided to play with the program a little and compare performance between .shift()ing and .pop()ing. Boom! The program appeared to be broken. Even worse, the whole point of interaction nets is that the queue represented by that array is invariant with respect to the order in which it is processed: the property of strong confluence, also known as the one-step diamond property. I thought I had fucked up hard.

First, I took a look at git-blame(1) for the line of code that calls .pop(), and found the corresponding commit. Then, I marked its parent commit as good with git-bisect(1). After a few steps, git-bisect(1) found the first bad commit.
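The procedure is mechanical enough that git-bisect(1) can even drive itself with a test command. A self-contained toy session (the repository and file names are invented here, not taken from inet-lib):

```shell
set -e
cd "$(mktemp -d)"
git init -q
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.com
export GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.com

# A few good commits, then one that switches .shift() to .pop():
echo 'queue.shift();' >queue.js
git add queue.js
git commit -qm 'commit 1'
for n in 2 3; do
	echo "// touch $n" >>notes.txt
	git add notes.txt
	git commit -qm "commit $n"
done
sed 's/shift/pop/' queue.js >t && mv t queue.js
git commit -qam 'commit 4'	# the bug sneaks in here
echo '// touch 5' >>notes.txt
git commit -qam 'commit 5'

# Mark the endpoints and let a test command do the rest:
git bisect start HEAD HEAD~4
git bisect run grep -q 'shift' queue.js

git show -s --format=%s refs/bisect/bad	# prints: commit 4
```

git bisect run treats exit status 0 as good and 1-124 as bad, so any shell one-liner that detects the breakage turns the whole hunt into a single command.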

Evidently, the problem had something to do with the indirection applied by the non-deterministic extension of interaction nets. And it did not take more than a couple of minutes to figure out a simple one-liner fix.

Overall, it took less than half an hour from finding the bug to fixing it, which I had first thought would take hours if not days. To me, it looks like yet more evidence that the idea behind git-bisect(1) is totally genius. So, thanks again, Linus!

P. S. Free advice: when making commits, it is always useful to keep in mind 1) a possible need to git-grep(1) some lines of code later, and 2) the almost inevitable need to deal with bugs, which is a lot easier when commits are suitable for git-bisect(1).


Here is one way to read the classic monograph on the λ-calculus [1]:

Section 2.1;
Exercises 2.4.1 (i)-(iii) and 2.4.2-2.4.13;
Exercise 2.4.15 (only in the original [2]);
Section 2.2;
Exercise 2.4.14;

Sections 3.1-3.3;
Exercises 3.5.1 (v), 3.5.1 (i), 3.5.6 (i), 3.5.2, 3.5.3, and 3.5.11;
Sections 13.1-13.2, up to and including Application 13.2.3;

Part II (Chapters 6-10);

Section 4.1;
Exercises 4.3.2 and 4.3.4;
Chapters 15 and 16.

To a certain approximation, it is exactly this material that is presented extremely concisely in [3] (in Russian).

[1] H. Barendregt. The Lambda Calculus, Its Syntax and Semantics (Russian translation). Moscow, 1985.
[2] H. P. Barendregt. The Lambda Calculus, Its Syntax and Semantics. North-Holland, 1984.
[3] A. Salikhmetov. Lambda Calculus Synopsis. arXiv:1304.0558, 2013.
$ cat >c.c
#include <stdio.h>

int main()
{
        fprintf(stdout, "stdout\n");
        fprintf(stderr, "stderr\n");
        return 0;
}
$ cc c.c
$ 3>&2 2>&1 1>&3 ./a.out | tee log
stdout
stderr
$ cat log
stderr
$
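The trick in the transcript above can be replayed without the C program; a minimal sketch using a shell command in place of ./a.out:

```shell
# 3>&2 saves the current stderr in fd 3; 2>&1 points stderr at the pipe
# (which is what fd 1 is inside a pipeline); 1>&3 points stdout at the
# saved stderr. Net effect: the pipe sees stderr, the terminal sees stdout.
{ 3>&2 2>&1 1>&3 sh -c 'echo stdout; echo stderr >&2'; } | tee log
cat log	# the pipe carried only the stderr line
```

This is handy whenever a program spams stdout but you want to grep or tee only its diagnostics.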
http://tinyurl.com/q7yhgjq

$ make
iverilog -Wall -DSIM -o comb comb.v
./comb
0381353f9 0 b2e09fd28ea2916f526a8dbb3a92235f0ddb9b0b1ccd0e7d9b5786f91b62031e
0381353fa 1 00000000627d0f02061ce63584c20662272c527d21f17dfaffb20d7de340423d
0381353fb 0 c90dd726ebe7c2770808fe574e85aba7e90ba2aea8998c70bcb24781d4010955
$ 

Hidden SSH

May. 3rd, 2013 08:36 pm
root@debian:~# apt-get install tor
Reading package lists... Done
Building dependency tree       
Reading state information... Done
[...]
root@debian:~# grep '^Hidden' /etc/tor/torrc
HiddenServiceDir /var/lib/tor/honey/
HiddenServicePort 22 127.0.0.1:22
root@debian:~# /etc/init.d/tor restart
Stopping tor daemon: tor.
Raising maximum number of filedescriptors (ulimit -n) to 8192.
Starting tor daemon: tor...
[...]
May 03 20:08:30.201 [notice] Opening Socks listener on 127.0.0.1:9050
done.
root@debian:~# torsocks ssh `cat /var/lib/tor/honey/hostname`
root@ohdzjoric6qwtr2c.onion's password: 
Linux debian 2.6.32-5-amd64 #1 SMP Mon Feb 25 00:26:11 UTC 2013 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Fri May  3 20:10:22 2013 from localhost
root@debian:~# dd if=/dev/urandom of=data count=1k
1024+0 records in
1024+0 records out
524288 bytes (524 kB) copied, 0.081219 s, 6.5 MB/s
root@debian:~# md5sum data
81d52fca4cfe42dca24659850139672a  data
root@debian:~# logout # latency about 1-2 sec.
Connection to ohdzjoric6qwtr2c.onion closed.
root@debian:~# torsocks scp `cat /var/lib/tor/honey/hostname`:data copy
root@ohdzjoric6qwtr2c.onion's password: 
data                                          100%  512KB  28.4KB/s   00:18    
root@debian:~# md5sum copy
81d52fca4cfe42dca24659850139672a  copy
root@debian:~# 
$ cat hex 
02000000
f40457b1
d005aeec
6e4fe577
f2a76dc3
73ef8901
220376b7
2b6de55a
00000000
bdcb52b0
9c48d2ee
26a513cb
cead7fb7
7daf4cef
9e2ab83b
bb9eb6b0
a7f5f990
fd270c51
378a0e1c
381353fa
$ xxd -r -p hex data
$ shasum -a 256 data >hash1
$ xxd -r -p hash1 bin
$ shasum -a 256 bin
e340423dffb20d7d21f17dfa272c527d84c20662061ce635627d0f0200000000  bin
$
This is how you perform word expansions using the wordexp() POSIX system interface:

alexo@kuha:~/wordexp$ make
        cc    -o wordexp wordexp.c
        ./wordexp
echo hello world
0: 'echo'
1: 'hello'
2: 'world'
"\$HOME = $HOME" '$PWD'\ =\ $PWD
0: '$HOME = /home/alexo'
1: '$PWD = /home/alexo/wordexp'
alexo@kuha:~/wordexp$

https://gist.github.com/4492253

This script was written on Solaris 11 running inside VirtualBox.

set -Cex

ROOT=/opt/euromake
DEFROUTER=10.0.2.2
NETWORK=17
MIN=10
MAX=99

cd $ROOT

mkdir -p vacant
for p in $(ls vacant); do
	test -d "vacant/$p"
	if echo $$ >"vacant/$p/lock"; then
		POOL=$p
		trap 'rm -f "vacant/$POOL/lock"' INT TERM EXIT
		break
	fi
done

test "$POOL" -ge "$MIN"
test "$POOL" -le "$MAX"

read ZONE <vacant/$POOL/next
test "$ZONE" -ge "$MIN"
test "$ZONE" -le "$MAX"

ZONENAME=$NETWORK.$POOL.$ZONE
ZONEPATH=/zones/clones/$NETWORK/$POOL/$ZONE

# Comment at your own risk!
echo \
zonecfg -z $ZONENAME <<-COMMIT
	create
	set zonepath=$ZONEPATH
	select anet linkname=net0
		set defrouter=$DEFROUTER
		set allowed-address=10.$ZONENAME
	COMMIT
echo \
zoneadm -z $ZONENAME clone origin # from /zones/origin

NEXT=$(($ZONE + 1))
if [ "$NEXT" -le "$MAX" ]; then
	echo $NEXT >|vacant/$POOL/next
	rm -f vacant/$POOL/lock
else
	mkdir -p full
	test ! -e "full/$POOL"
	mv vacant/$POOL full
fi

trap - INT TERM EXIT

Still not on POSIX? Then we are coming to you!

That GNU readline of yours is a pitiful, helpless creature compared to the vi Line Editing Command Mode of the POSIX Shell.

Add the command set -o vi to your ~/.profile or ~/.*shrc (as appropriate) if, like me, you are too lazy to type it by hand every time; press Escape, and happiness is yours: v, h, j, k, l, /, N, n, c, I, A, b, w, 0, $ and many more tasty words. We will ban your nasty Tab, too: there are =, \ and even *. In short, enjoy!

In Solaris 8, this part of the standard is implemented in the Korn Shell interpreter.
dmitris-imac:~ dmvo$ telnet austin.local
Trying 192.168.1.2...
Connected to austin.
Escape character is '^]'.


SunOS 5.8

login: dmvo
Password: 
Last login: Mon Dec 10 20:07:36 from mac
Sun Microsystems Inc.   SunOS 5.8       Generic Patch   February 2004
$ uname -a
SunOS austin 5.8 Generic_108528-29 sun4u sparc SUNW,Ultra-5_10
$ df -kP .
Filesystem           1024-blocks        Used   Available Capacity  Mounted on
mac:/Users/dmvo/Desktop
                      1952363672   273263664  1678844008    14%    /home/dmvo
$ echo $PROJECTDIR
alexo
$ ls -al /export/home/alexo/src
total 6
drwxrwxr-x   3 alexo    staff        512 Dec 10 21:17 .
drwxr-xr-x  12 alexo    staff        512 Dec 10 18:40 ..
drwxrwxr-x   2 alexo    staff        512 Dec 10 20:50 SCCS
$ ls
$ ed
a 
# %W%

all: rip
        ./rip

clean:
        -rm -f rip
.
w Makefile
43
q
$ sccs create Makefile

Makefile:
1.1
7 lines
$ rm ,Makefile
$ ed
a
#include <stdio.h>

static const char version[] = "%W%";

int main()
{
        printf("R. I. P.\n");
        return 0;
}
.
w rip.c
121
q
$ sccs create rip.c

rip.c:
1.1
9 lines
$ rm ,rip.c
$ ls # The moment of truth!
$ make
sccs  get -s Makefile -GMakefile
sccs  get -s rip.c -Grip.c
cc    -o rip rip.c 
./rip
R. I. P.
$ what rip
rip:
        rip.c   1.1
        stdio.h 1.78    99/12/08 SMI
        stdio_iso.h     1.2     99/10/25 SMI
        feature_tests.h 1.18    99/07/26 SMI
        isa_defs.h      1.20    99/05/04 SMI
        va_list.h       1.12    99/05/04 SMI
        stdio_tag.h     1.3     98/04/20 SMI
        stdio_impl.h    1.8     99/06/10 SMI
$ which cc
/opt/SUNWspro/bin/cc
$ date # Guess why?
Mon Dec 10 20:51:35 EET 1984
$ exit
Connection closed by foreign host.
dmitris-imac:~ dmvo$ 
  • English language.
  • Touch typing.
    • Finger placement.
    • Keyboard layout for the left hand with modifiers on the right.
    • Keyboard layout for the right hand with modifiers on the left.
    • Caps Lock.
  • E-mail.
    • Composing messages.
      • Plain text.
      • Salutation, greeting, introduction.
      • Appropriate language register for the text.
      • Closing and signature.
      • Postscript and short signature.
    • Carbon copy and blind carbon copy.
    • Reply and reply-to-all.
    • Forwarding.
  • Basic skills of working with the operating system.
  • Elements of the subject domain.
    • LaTeX.
    • Web.
    • Portable System Interface.
  • Programming practice.
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define VOIDPTRF "(void *)0x%" PRIxPTR

int main(void)
{
	printf("\t" VOIDPTRF "\n", (uintptr_t)main);
	return 0;
}

alexo@euromake:/tmp/voidptrf$ make
cc     voidptrf.c   -o voidptrf
./voidptrf
        (void *)0x80483c4
alexo@euromake:/tmp/voidptrf$  
This is how you convert C source code (or any other ASCII text) into C string literals using the shell.

make(1) rules:

all: verbatim.h

verbatim.h: txt2cs.sed def.h sim.c
	printf '#define %s \\\n%s\n\n' >$*.tmp \
		INDEF "$$(sed -f txt2cs.sed def.h)"
	printf '#define %s \\\n%s\n' >>$*.tmp \
		INSIM "$$(sed -f txt2cs.sed sim.c)"
	mv $*.tmp $@

clean:
	-rm -fr verbatim.h *.tmp

sed(1) script:

s/\\/\\\\/g
s/"/\\"/g
s/	/\\t/g

$!s/^\(.*\)$/	"\1\\n" \\/
$s/^\(.*\)$/	"\1\\n"/
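For instance, saving the script above as txt2cs.sed and feeding it a toy C fragment (the file names here are just for the demo):

```shell
# The sed script from above, saved to a file:
cat >txt2cs.sed <<'SED'
s/\\/\\\\/g
s/"/\\"/g
s/	/\\t/g

$!s/^\(.*\)$/	"\1\\n" \\/
$s/^\(.*\)$/	"\1\\n"/
SED

# Each input line becomes a quoted, escaped string literal; every line
# but the last gets a trailing backslash so the whole thing fits in a
# single #define.
printf 'int main()\n{\n\tputs("hi");\n}\n' | sed -f txt2cs.sed
# 	"int main()\n" \
# 	"{\n" \
# 	"\tputs(\"hi\");\n" \
# 	"}\n"
```

Backslashes are doubled first, then quotes and tabs are escaped, so the output is safe to paste between a pair of parentheses in C.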


I should have learnt this before.

Let us suppose you have just made a pull request with about twenty commits: something borrowed, something blue, and something from Linus. Say, a few commits have typos in their commit logs, a few should have been merged into a single one, some should have been removed, a couple of them are so huge that you should split them into several, and except for, maybe, one lucky change, the rest is to be completely rewritten.

On GitHub, however, it is not common practice to deal with pull requests that way. After a pull request has been reviewed, it usually ends up appended with nearly as many commits as the original pull request contained. When the pull request has finally been merged, this in turn leaves the master branch containing almost nothing but garbage.

Reviewing patch series and making the developer redo all the work from the very beginning, possibly several times, might look like an inapplicable approach to pull requests. Besides, it does not guarantee that the master branch contains no garbage; of course, it is not a silver bullet. Nevertheless, it does help avoid more than 90% of the junk, keeping the master branch log much cleaner. That turns out to be really helpful when you hunt a bug with git-bisect(1).

There exists at least the following scheme, which makes GitHub-style pull requests do the trick, quite close to reviewing patch series on LKML.

  1. Make a new pull request from bug2645 to master.
  2. Discuss the changes and how to improve them until it is clear what to do for the next iteration.
  3. Close the pull request in order to save the resulting review.
  4. Fork a backup with git branch bug2645-backup bug2645 just in case.
  5. Play with git rebase -i master (edit and squash), git reset HEAD^ (splitting commits), git add -p wtf.c (s and e), and git stash -k (test results before committing) to address the comments from the review.
  6. When you are done, type git push -f origin bug2645 and start from the very beginning.
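Step 5 is the heavy lifting; as a small self-contained illustration of the splitting part, here is git reset HEAD^ cutting one oversized commit into two focused ones (the repository, file names, and issue number are invented):

```shell
set -e
cd "$(mktemp -d)"
git init -q
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.com
export GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.com
git commit -q --allow-empty -m 'initial'

# One huge commit touching two unrelated files:
echo '/* declarations */' >mman.h
echo '/* implementation */' >msync.c
git add mman.h msync.c
git commit -qm '2645: everything at once'

# Undo the commit but keep the work tree, then re-commit piecewise:
git reset -q HEAD^
git add mman.h && git commit -qm '2645: add declarations header'
git add msync.c && git commit -qm '2645: update time stamps in msync()'

git log --oneline | wc -l	# 3 commits: initial plus two focused ones
```

The same reset-and-recommit loop, combined with git add -p for hunk-level staging, handles arbitrarily messy commits.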

This scheme has been tested on an artificial task simulating a huge and ugly patch series. Specifically, we cleared the master branch and pretended that its backup was the development branch far ahead of master. Then, we agreed to write commit logs in a different manner than before. Namely, all commit logs should have the form 2645: update time stamps in msync(), where 2645 is the number of the GitHub issue corresponding to the applied changes. This way, one can always track exactly which bug implied each particular commit.

So, give it a try!
