
Saturday, December 31, 2016

Most Visited Websites

A while back I came across a post showing how to see your most used bash commands, like this one. Learning this was very enlightening, and today I had a thought in the exact same vein:

What are my most visited websites?

The answer took longer to get at than the bash version (since Firefox stores history in a SQLite file), but the process was exactly the same: get your history, isolate the domain part of each url, count the occurrences of each domain, and finally sort. I ended up using Python 2 for the entire process.

History


Firefox stores history, bookmarks, and download history in one sqlite file called places.sqlite. This file is located in AppData\Roaming\Mozilla\Firefox\Profiles\*hash*.default.

I had trouble at this step because I had never used SQL before. Thankfully Python makes it very easy. Using the sqlite3 library I opened a connection and then a cursor. The file has a lot of tables in it, but the only one I ended up needing was moz_places. The urls are under the column url. To get all the urls run the following code:

import sqlite3

connection = sqlite3.connect("places.sqlite")
cursor = connection.cursor()

urls = []
cursor.execute("SELECT url FROM moz_places;")
for url in cursor.fetchall():
    urls.append(url[0])


The expression url[0] is there because fetchall returns each row as a one-element tuple.

Getting the Domain


My gut instinct was to use a regex to grab the domain from the full url; however, I found out Python includes the urlparse library.

from urlparse import urlparse

domains = []
for url in urls:
    domains.append(urlparse(url).netloc)


Counter


From here I was tempted to print the domains to stdout and just reuse the solution for the most used bash commands (*output* | sort | uniq -c | sort -nr). Instead I elected to finish the process entirely in Python. To emulate the uniq command I simply used Python's Counter class.

from collections import Counter
domain_frequencies = Counter(domains)
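To see Counter standing in for uniq -c on a toy list (the domains below are made up for illustration):

```python
from collections import Counter

# toy data (made-up domains) showing Counter emulating `uniq -c`
domains = ["imgur.com", "google.com", "imgur.com", "reddit.com", "imgur.com"]
counts = Counter(domains)

print(counts["imgur.com"])    # 3
print(counts.most_common(1))  # [('imgur.com', 3)]
```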


Sort


I spent a longer time on this step than I care to admit. I almost wrote a sorting method from scratch, which would have been painful. Thankfully the Counter class comes with a built-in method, most_common, that returns the entries sorted by count.

most_visited = domain_frequencies.most_common(20)
for idx, (domain, count) in enumerate(most_visited, 1):
    print idx, "-", domain


Summary


That's all it took! I didn't expect the solution to take so little effort. What started as intellectual curiosity turned into a fun programming exercise. By the way, here are my top 20 most visited domains:

1 - i.imgur.com
2 - www.google.com
3 - imgur.com
4 - www.youtube.com
5 - www.reddit.com
6 - narioko.tk
7 - gfycat.com
8 - i.redd.it
9 - www.netflix.com
10 - en.wikipedia.org
11 - i.reddituploads.com
12 - twitter.com
13 - i.4cdn.org
14 - cdn.awwni.me
15 - www.mangastream.to
16 - www.minecraftforum.net
17 - out.reddit.com
18 - github.com
19 - www.curse.com
20 - stackoverflow.com


I'm not surprised at the image domains ranking so high. The usual suspects also fill the list: google, youtube, reddit, netflix, etc. I'm somewhat surprised by minecraftforum and curse being this high; I suppose they rank up there because of all the effort I put into finding mods for Minecraft. More refined url parsing would give better results. I neglected to remove www from the urls, which split up some domains. Other domains are split across multiple subdomains; Tumblr is not ranked high on my list because its domains show up as idiots-blackboard.tumblr.com. Of course I couldn't include all the domains I've visited. Much of my browsing is split between this machine, my laptop, and my phone. Not to mention any private browsing :)
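The www problem can be handled with a small normalization step. A sketch (written for Python 3, where the urlparse module now lives in urllib.parse; the urls below are made up):

```python
from collections import Counter
from urllib.parse import urlparse  # Python 3 home of urlparse

def domain(url):
    # strip a leading "www." so www.reddit.com and reddit.com count together
    netloc = urlparse(url).netloc
    return netloc[4:] if netloc.startswith("www.") else netloc

urls = ["https://www.reddit.com/r/all",
        "https://reddit.com/",
        "https://i.imgur.com/abc.png"]
print(Counter(domain(u) for u in urls).most_common())
# [('reddit.com', 2), ('i.imgur.com', 1)]
```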

Wednesday, June 22, 2016

My Home Directory

I just want to make a light post today. Let's take a look at my home directory.

% tree -L 1


.
├── bak
├── bin
├── dev
├── doc
├── etc
├── junk
├── media
├── public
└── repos


It's important to tell the XDG user-dirs system that you've changed these directories from the defaults; then it won't try to recreate the old directories over and over. Simply edit user-dirs.dirs in ~/.config to point to your preferred locations. Now even Nautilus will be able to make shortcuts to your directories in the "Places" sidepane.

% vim .config/user-dirs.dirs


XDG_DESKTOP_DIR="$HOME/doc/desktop"
XDG_DOWNLOAD_DIR="$HOME/doc/downloads"
XDG_TEMPLATES_DIR="$HOME/.templates"
XDG_PUBLICSHARE_DIR="$HOME/public"
XDG_DOCUMENTS_DIR="$HOME/doc"
XDG_MUSIC_DIR="$HOME/media/music"
XDG_PICTURES_DIR="$HOME/media/pics"
XDG_VIDEOS_DIR="$HOME/media/vid"


And don't forget to run this

% xdg-user-dirs-update

bak - backups directory

This directory holds assorted files as backups. I like having these all in one spot.

bin - binary files and scripts

All the scripts I want to use system-wide go in this directory. Almost all of these are single-file utility scripts. If I have a larger program it usually goes in ~/dev. The Oscar for best supporting actress goes to this line in my .zshrc file:

export PATH=/home/arno/bin:$PATH

dev - development directory

This folder holds all of my personal projects. When I want to work on a new coding project I'll never finish I make a subdirectory for it here. Simple projects can just be a script with some testing files thrown in with it. More serious projects get a larger structure similar to this:

% tree -L 1

.
├── bak
├── bin
├── doc
├── include
├── makefile
├── obj
├── README
├── src
└── tests


doc - documents folder

Quite simply my documents folder. It holds the bulk of my files. I keep a directory for school here, with further directories dedicated to each class; for example, the course Mechanical Engineering 366 becomes ME 366. I keep a work directory here dedicated to resumes, cover letters, and the odd timesheet. I have a pdfs directory where I keep downloaded textbooks and articles. My Dropbox folder also goes here.

etc - environment and tool configuration

The ~/etc directory contains symlinks to my most visited config files. I never liked how different programs used different config locations. They always fail to use a consistent naming scheme too (I'm looking at you .tmux.conf). By symlinking through ~/etc I've made a consistent place for all those files. I need only type

% vim ~/etc/xbk

to really get

% vim ~/.xbindkeysrc.scm

I can also use this directory for backups if I wanted. I currently don't however as git can easily take care of the regular files. The rest of etc:

% tree etc

etc
├── bash_profile -> /home/arno/.bash_profile
├── bashrc -> /home/arno/.bashrc
├── compton -> /home/arno/.config/compton.conf
├── dzen2 -> /home/arno/.config/herbstluftwm/panel.sh
├── emacs -> /home/arno/.emacs
├── functions -> /home/arno/.functions
├── hlwm -> /home/arno/.config/herbstluftwm/autostart
├── i3wm -> /home/arno/.i3/config
├── sbclrc -> /home/arno/.sbclrc
├── ssh -> /home/arno/.ssh/config
├── tmux -> /home/arno/.tmux.conf
├── vimrc -> /home/arno/.vimrc
├── xbk -> /home/arno/.xbindkeysrc.scm
├── xinitrc -> /home/arno/.xinitrc
├── zprofile -> /home/arno/.zprofile
└── zshrc -> /home/arno/.zshrc


junk - exactly what it says on the tin

Sometimes I just need to make a quick scratch file for testing. By keeping them in one place they don't end up scattered all over my system. There's not any reason to keep them past their testing phase, but I sometimes go back and reference them. I've used other names for this directory in the past such as ~/scratch and ~/tmp.

media - media directory

Collection of all my music, pics, gifs/webms, games, videos, etc... I'm rather meticulous about sorting these files by subtype.

% tree -L 1 media

media
├── games
├── music
├── pics
└── vid


public - public sharing folder

Everything that goes in here is available to any other computer on the network. I sometimes need to share files between my laptop and desktop, and the public folder is the easiest way to accomplish this.

repos - repositories

This is where I put other people's projects. If I download the source for a program I store it here. This directory is my go-to destination for git clone commands. Usually these programs install themselves into /usr/bin. If not I manually put them in ~/bin.

Sunday, June 19, 2016

Maxwell's Relations

I recently found out that thermodynamics can be recast in terms of exterior calculus. One of the many things made easier is Maxwell's relations. Anyone who's taken thermodynamics has at some point run into them. They're usually derived using the equality of mixed partial derivatives and Legendre transforms. Some people instead come up with all sorts of funny mnemonics like this one.
Try explaining how those diagrams work. Go ahead, I'll wait.

I'm assuming you know at least a little bit of exterior calculus. The only rules we really need are some of the rules for the exterior derivative and the wedge product:

\begin{equation}
d^2 f = 0
\end{equation}

\begin{equation}
d( f dg ) = df \wedge dg
\end{equation}

\begin{equation}
a \wedge b = - b \wedge a
\end{equation}

\begin{equation}
a \wedge a = 0
\end{equation}

Firstly we'll look at a basic one-form

\begin{equation}
df = \left(\frac{\partial f}{\partial x}\right)_y dx + \left(\frac{\partial f}{\partial y}\right)_x dy
\end{equation}

Now take the wedge product of both sides with dx (with the first term on the right cancelling)

\begin{equation}
df \wedge dx = \left(\frac{\partial f}{\partial y}\right)_x dy \wedge dx
\end{equation}

rearranging to get the ratio of wedge products

\begin{equation}
\frac{df \wedge dx}{dy \wedge dx} = \left(\frac{\partial f}{\partial y}\right)_x
\label{surform}
\end{equation}

This rule needs a name. I tried searching for it but couldn't find anything. Surface form? Surface derivative? Whatever; time for the real show to start. Let's begin with the first law of thermodynamics and take the exterior derivative of both sides.

\begin{equation}
dU = T dS - p dV
\end{equation}

\begin{equation}
0 = d^2 U = dT \wedge dS - dp \wedge dV
\end{equation}

\begin{equation}
dT \wedge dS = dp \wedge dV
\end{equation}

Divide both sides by the quantity $ dS \wedge dV $ and substitute for the correct partial derivatives using \eqref{surform} to get

\begin{equation}
-\left(\frac{\partial T}{\partial V}\right)_S = \left(\frac{\partial p}{\partial S}\right)_V
\end{equation}

where the minus sign came from flipping the wedge product. To get the other three relations we need only divide by one differential from each side $ dS \wedge dp $, $ dT \wedge dV $, and $ dT \wedge dp $. We easily get the four Maxwell relations for $ (p, V, T, S) $. To get any others just start with the first law with any new variables in it (like $ (\mu,N) $ or $ (M,B) $).
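For instance, dividing both sides by $ dT \wedge dp $ and flipping wedge products to match the form of \eqref{surform} gives

\begin{equation}
\frac{dT \wedge dS}{dT \wedge dp} = \frac{dS \wedge dT}{dp \wedge dT} = \left(\frac{\partial S}{\partial p}\right)_T
\end{equation}

\begin{equation}
\frac{dp \wedge dV}{dT \wedge dp} = -\frac{dV \wedge dp}{dT \wedge dp} = -\left(\frac{\partial V}{\partial T}\right)_p
\end{equation}

so that

\begin{equation}
\left(\frac{\partial S}{\partial p}\right)_T = -\left(\frac{\partial V}{\partial T}\right)_p
\end{equation}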

The reciprocity theorem is also easy to see

\begin{equation}
\frac{dy \wedge dx}{dz \wedge dx} = \frac{1}{\frac{dz \wedge dx}{dy \wedge dx}} \implies
\left(\frac{\partial y}{\partial z}\right)_x = \frac{1}{\left(\frac{\partial z}{\partial y}\right)_x}
\end{equation}

as well as the cyclic rule

\begin{equation}
\left(\frac{\partial x}{\partial y}\right)_z \left(\frac{\partial y}{\partial z}\right)_x \left(\frac{\partial z}{\partial x}\right)_y = -1
\end{equation}

since we have to switch three of the wedge products to get the numerators and denominators to cancel - picking up three negative signs along the way. There are more applications of exterior calculus to thermodynamics, but I have to learn them first before I can write about them :P

Tuesday, June 14, 2016

Lagrangians, Coordinates, Action!

Most people learn about Newton's 2nd law back in grade school. It isn't until we've learned multivariable calculus that we tackle alternatives to the Newtonian framework like Lagrangian mechanics. I want to look at deriving the Euler-Lagrange equations and to motivate the form of the Lagrangian. Then at the end I'll work some examples before finally talking about why Lagrangians are important.






Action Principle

We utilize the language of math to describe physical principles. In our first physics classes we learn simple relations such as

$$ v = d/t $$

This statement says that the velocity of something is the distance traveled divided by the time it took. As we progress in physics we start using derivatives to express our ideas. Instead we use something like

$$ v = \frac{dx}{dt} $$

This says the velocity is the time derivative of position. We write almost all laws of nature as differential equations. Perhaps most famously Newton's 2nd law is a differential equation.

$$ F = m\frac{d^2 x}{dt^2} $$

or alternatively

$$ F = \frac{dp}{dt} $$



Unsurprisingly, simple functions and differential equations are not the only mathematical ways to describe physics (that's not to say we won't be using differential equations at all). Instead we will look at functionals. Historically one of the first such problems was the brachistochrone problem, which asks: what curve between two points would allow a frictionless bead to travel between them in the shortest amount of time? The problem can be cast as a functional by asking you to find the minimum of

$$ t_{12} = \int_{P_1}^{P_2}\frac{ds}{v} $$

where s is the arc length and v is the velocity. Another problem is finding the surface with minimum area that spans a closed curve. This is the shape a soap film takes when a wireframe in the curve's shape is dipped in a soapy solution. The law of minimal area can be stated as finding the minimum of the functional

$$ A[u] = \int\int_{\Omega}\sqrt{1+{(\frac{\partial u}{\partial x})}^2+{(\frac{\partial u}{\partial y})}^2} dx dy $$

for some function $z = u(x,y)$. Fermat's principle in optics says that light travels along the path that locally minimizes the optical length between its endpoints. In functional form we are asked to minimize

$$ L[f] = \int_{x_0}^{x_1} n(x, f(x))\sqrt{1+f'(x)^2} dx $$

for some function $y = f(x)$.



The question (if you couldn't tell from the lead-up already) is: can we cast mechanics as the minimization (or maximization, or some extremum) of a functional? The answer of course is a resounding yes. We say that the motion a particle takes is the one that minimizes the action functional (denoted by S)

$$ S[q(t)] = \int_{t_0}^{t_1}L(q(t),\dot{q}(t),t) dt $$

The particle can take an infinite number of paths between two points, but nature makes it take the one that makes the action S a minimum. The "L" inside the integral is called the Lagrangian after Joseph-Louis Lagrange who first studied this approach to classical mechanics. We shall see later that the Lagrangian can be written in the form $L = T - V$ with T being the kinetic energy and V the potential. This may seem strange at first sight, but I'm going to show how to derive the Euler-Lagrange equations before showing you why the Lagrangian takes this form.
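This principle can be checked numerically. A sketch (the discretization parameters and perturbation are my own choices): for a free particle with $L = \frac{1}{2}mv^2$, the straight-line path between two points yields a smaller discretized action than a perturbed path with the same endpoints.

```python
import math

def action(path, dt, m=1.0):
    # discretized S = integral of (1/2) m v^2 dt, with finite-difference velocities
    return sum(0.5 * m * ((path[i + 1] - path[i]) / dt) ** 2 * dt
               for i in range(len(path) - 1))

N, T = 100, 1.0
dt = T / N
straight = [i * dt for i in range(N + 1)]  # x(t) = t, the classical path
# sinusoidal perturbation vanishing at both endpoints
wiggly = [x + 0.1 * math.sin(math.pi * i / N) for i, x in enumerate(straight)]

print(action(straight, dt) < action(wiggly, dt))  # True
```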



The Euler-Lagrange Equations

We will now investigate how to do calculus with functionals. When we first learn derivatives, we move the coordinate by a little bit (dx) and ask how the function changes (df): $df = f(x+dx) - f(x)$. For functionals we carry out the exact same process. The action S is a function of the path the particle takes, so we perturb the path $q(t)$ by an amount $\delta q(t)$ and look at the change in S.

$$ \delta S = \int_{t_0}^{t_1}[L(q(t)+\delta q(t),\dot{q}(t)+\delta \dot{q}(t),t)-L(q(t),\dot{q}(t),t)] dt $$

I'll skip the boring stuff around the exact mathematical definition of these variational derivatives and just say that the variation of the functional is equal to the integral of the sum of the partial derivatives of the Lagrangian (L) times their respective variations (similar to the definition of the total derivative).

$$ \delta S = \int_{t_0}^{t_1}[\frac{\partial L}{\partial q}\delta q(t) + \frac{\partial L}{\partial \dot{q}}\delta \dot{q}(t)] dt $$

For now we will ignore the variation of the time coordinate. We now need to establish a few important facts. For one the variations disappear on the boundary i.e.

$$ \delta q(t_0) = \delta q(t_1) = 0 $$

Another important fact is that the variation and time derivatives are interchangeable. In other words

$$ \delta \dot{q}(t) = \frac{d}{dt}\delta q(t) $$

If we focus on the second term

$$ I = \int_{t_0}^{t_1}\frac{\partial L}{\partial \dot{q}}\delta \dot{q}(t) dt \\
= \int_{t_0}^{t_1}\frac{\partial L}{\partial \dot{q}}\frac{d}{dt}\delta q(t) dt \\
= [\frac{\partial L}{\partial \dot{q}}\delta q(t)]|_{t_0}^{t_1} - \int_{t_0}^{t_1}\frac{d}{dt}(\frac{\partial L}{\partial \dot{q}})\delta q(t) dt $$

by integration by parts. We already said that the variations ($\delta q$) disappear on the boundary. So now the variation of the action is

$$ \delta S = \int_{t_0}^{t_1}\frac{\partial L}{\partial q}\delta q(t) - \frac{d}{dt}(\frac{\partial L}{\partial \dot{q}})\delta q(t) dt \\
= \int_{t_0}^{t_1}[\frac{\partial L}{\partial q} - \frac{d}{dt}(\frac{\partial L}{\partial \dot{q}})]\delta q(t) dt $$

Now to find the extrema we set $\delta S = 0$. The variation $\delta q$ is an arbitrary function, so we can say that the quantity in the brackets must be zero.

$$ \frac{\partial L}{\partial q} - \frac{d}{dt}(\frac{\partial L}{\partial \dot{q}}) = 0 $$

If we allow the Lagrangian to be a function of multiple coordinates and their first derivatives then it is easy to see that we get one equation for each coordinate like so

$$ \frac{\partial L}{\partial q^i} - \frac{d}{dt}(\frac{\partial L}{\partial \dot{q}^i}) = 0 $$

These are the celebrated Euler-Lagrange equations of motion.
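As a sanity check, sympy (if you have it installed) can carry out this machinery mechanically. Here it recovers Hooke's law from a harmonic-oscillator Lagrangian; the oscillator is my own illustrative choice, not an example from this post.

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')

# L = T - V for a 1D harmonic oscillator
L = sp.Rational(1, 2) * m * sp.diff(x(t), t)**2 - sp.Rational(1, 2) * k * x(t)**2

# Euler-Lagrange expression: dL/dx - d/dt(dL/dxdot)
ele = sp.diff(L, x(t)) - sp.diff(sp.diff(L, sp.diff(x(t), t)), t)
print(sp.simplify(ele))  # equals -k*x - m*x'', i.e. m*x'' = -k*x
```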



The Form of the Lagrangian

Let us instead write the Euler-Lagrange equations (henceforth ELE) in the following form

$$ \frac{\partial L}{\partial q^i} = \frac{d}{dt}(\frac{\partial L}{\partial \dot{q}^i}) $$

which I claim has the same form as Newton's 2nd law

$$ F_i = \frac{dp_i}{dt} $$

if we identify $F_i = \frac{\partial L}{\partial q^i}$ and $p_i = \frac{\partial L}{\partial \dot{q}^i}$. Now consider the fact that the Lagrangian is a function of the positions and the velocities only $L = L(x,v)$. Therefore the total differential of the Lagrangian is

$$ dL = \frac{\partial L}{\partial v}dv + \frac{\partial L}{\partial x}dx \\
= pdv + Fdx $$

$$ \int dL = \int p dv + \int F dx $$

In the simplest case $p = mv$ so the first term on the right becomes $\frac{1}{2}mv^2$. For a conservative force we know it is the negative derivative of the potential. Now $\int F dx = \int -\frac{\partial V}{\partial x} dx = -V$. To sum everything up the Lagrangian in the simplest case is

$$ L(x,v) = T(v) - V(x) $$

We now define the Lagrangian in all coordinates to be $L = T - V$.



Examples

What have we just derived? First we stated that the laws of mechanics can be derived from an action principle. We wrote down a general form of an action that depends on the coordinates, their first derivatives, and time. Next we derived what the laws of mechanics would look like if derived from a Lagrangian. By observing the form of the ELE we saw that we could get back Newton's 2nd law if we wrote down the Lagrangian as the difference between the kinetic and potential energy. Let's see this in action.

1. Free Particle

$$ L = \frac{1}{2}m(\dot{x}^2 + \dot{y}^2 + \dot{z}^2) $$

$$ 0 = \frac{d}{dt}(\frac{\partial L}{\partial \dot{x}}) = \frac{d}{dt}(m\dot{x}) ...$$

$$ \ddot{x} = \ddot{y} = \ddot{z} = 0 $$

2. 1D Particle with Potential

$$ L = \frac{1}{2}m\dot{x}^2 - V(x) $$

$$ \frac{\partial L}{\partial x} = -\frac{\partial V}{\partial x} = m\ddot{x} = \frac{d}{dt}(\frac{\partial L}{\partial \dot{x}}) $$

as was expected.

3. Particle in Polar Coordinates

$$ L = \frac{1}{2}m(\dot{r}^2 + r^2{\dot{\theta}}^2) - V(r) $$

In the angular coordinate:

$$ \frac{d}{dt}(mr^2\dot{\theta}) = 0 $$

$$ p_\theta := mr^2\dot{\theta} = \text{constant} $$

In the radial coordinate (writing $h := r^2\dot{\theta} = p_\theta/m$ for the conserved angular momentum per unit mass):

$$ m\ddot{r} = -\frac{\partial V}{\partial r} + m{\dot{\theta}}^2r \\
= -\frac{\partial V}{\partial r} + \frac{mh^2}{r^3} $$

4. Particle in Rotating System

Define the z-coordinate to not change. The new x and y axes become

$$ \hat{x}' = \cos(\Omega t)\hat{x} + \sin(\Omega t)\hat{y} \\
\hat{y}' = -\sin(\Omega t)\hat{x} + \cos(\Omega t)\hat{y} $$

$$ L = \frac{1}{2}m[{(\dot{x}' - \Omega y')}^2 + {(\dot{y}' + \Omega x')}^2 + \dot{z}^2] - V(x',y',z) $$

$$ m\ddot{x}' = -\frac{\partial V}{\partial x'} + m\Omega^2x' + 2m\Omega\dot{y}' \\
m\ddot{y}' = -\frac{\partial V}{\partial y'} + m\Omega^2y' - 2m\Omega\dot{x}' \\
m\ddot{z} = -\frac{\partial V}{\partial z} $$

The second term in the first two equations is the centrifugal force. The third term is the Coriolis force. (Try deriving those from Newton's laws!)


Advantages

Lagrangian mechanics doesn't introduce any new physics. Instead it gives us a simple way to write down the equations of motion for any system with conservative forces, often quickly and nearly mindlessly. (Side note: the Lagrangian approach can also handle non-conservative forces as well as constraint forces; I'll talk about these in another post.) One advantage of the Lagrangian view is that the equations of motion are invariant under changes of coordinates. If one coordinate system uses $q^1,...,q^n$ while another uses $Q^1,...,Q^n$ we can rewrite the ELE

$$ 0 = \frac{\partial L}{\partial q^i} - \frac{d}{dt}(\frac{\partial L}{\partial \dot{q}^i}) \\
= \sum_{j = 1}^{n} [ \frac{\partial L}{\partial Q^j}\frac{\partial Q^j}{\partial q^i}
+ \frac{\partial L}{\partial \dot{Q}^j}\frac{\partial \dot{Q}^j}{\partial q^i}
- \frac{d}{dt}(\frac{\partial L}{\partial \dot{Q}^j}\frac{\partial \dot{Q}^j}{\partial \dot{q}^i}) ] $$

From the relation $Q^j = Q^j(q^1,...,q^n,t)$ we can derive $\dot{Q}^j = \sum_{i = 1}^{n} \frac{\partial Q^j}{\partial q^i}\dot{q}^i + \frac{\partial Q^j}{\partial t}$. We can then see that $\frac{\partial \dot{Q}^j}{\partial \dot{q}^i} = \frac{\partial Q^j}{\partial q^i}$. Next

$$ \frac{\partial \dot{Q}^j}{\partial q^i} = \sum_{k = 1}^{n} \frac{\partial^2 Q^j}{\partial q^i \partial q^k}\dot{q}^k + \frac{\partial^2 Q^j}{\partial q^i \partial t} $$

and similarly

$$ \frac{d}{dt}(\frac{\partial Q^j}{\partial q^i}) = \sum_{k = 1}^{n} \frac{\partial^2 Q^j}{\partial q^i \partial q^k}\dot{q}^k + \frac{\partial^2 Q^j}{\partial q^i \partial t} $$

Skipping some painful algebra we now see that

$$ \sum_{j = 1}^{n} [ (\frac{\partial L}{\partial Q^j} - \frac{d}{dt}(\frac{\partial L}{\partial \dot{Q}^j}))\frac{\partial Q^j}{\partial q^i} ] = 0 $$

So even in the new coordinates Q the ELE still hold

$$ \frac{\partial L}{\partial Q^j} - \frac{d}{dt}(\frac{\partial L}{\partial \dot{Q}^j}) = 0 $$

This is important. We can see that the ELE hold whether the coordinates are cartesian, angular, or even accelerating. Newton's 2nd law changes form depending on the coordinates: it is $ F = ma $ in cartesian coordinates, $ \tau = I\alpha $ in angular coordinates, and much more complicated in rotating coordinates. The invariance of the ELE takes on an even more important role once we move to Lagrangian densities.

The Lagrangian also gives rise to Noether's theorem, a beautiful result that lets us read off the conserved quantities of a system from the symmetries of the Lagrangian. We saw in example 3 that if the Lagrangian does not depend on a coordinate $q^i$ then the corresponding momentum $p_i = \frac{\partial L}{\partial \dot{q}^i}$ is conserved. Another consequence is that if $ \frac{\partial L}{\partial t} = 0 $ then energy is conserved! I'll go through a simple proof of Noether's theorem in a future post.

It may not be all that important, but I find it quite beautiful that the laws of mechanics can be derived from the principle of least action. The idea that nature is constantly trying to minimize the action is stunning. Interestingly, once we move on to quantum mechanics the principle of least action no longer holds exactly. Instead particles/fields can take any path/configuration; however, each path contributes a phase proportional to its action. The sum is dominated by paths near the minimal action, since the phases of paths with larger action tend to cancel each other out. Classical mechanics is recovered because particles stick very closely to the classical path.

Sunday, June 12, 2016

LaTeX/Mathjax Test Post

Test of Mathjax.

Test of inline math $x^n + y^n = z^n$.

Test of display math. How about Stokes' Theorem for differential forms?
$$\int_{\Omega}d\omega = \oint_{\partial\Omega}\omega$$

\begin{equation}
\frac{dy}{dx} - p(x)y = g(x)
\label{example}
\end{equation}

We can now refer to \eqref{example}

Multiline
\begin{multline*}
\nabla\times \vec{v} = (\frac{\partial v_z}{\partial y}-\frac{\partial v_y}{\partial z})\hat{x}\\
+(\frac{\partial v_x}{\partial z}-\frac{\partial v_z}{\partial x})\hat{y}\\
+(\frac{\partial v_y}{\partial x}-\frac{\partial v_x}{\partial y})\hat{z}
\end{multline*}

Alignment
\begin{align*}
2x - 5y &= 8 \\
3x + 9y &= -12
\end{align*}

Gathering
\begin{gather*}
2x - 5y = 8 \\
3x^2 + 9y = 3a + c
\end{gather*}

Sunday, June 14, 2015

Anonymous Wrapper Functions Revisited


Some definitions:


In [1]:
# I made the fixed-point combinator using recursion for simplicity
function fix(fn)
    Z = fn( (input...) -> Z(input...) ) # named after the strict Z combinator (which I'm using)
end




Out[1]:
fix (generic function with 1 method)



In [2]:
fac = (fac) -> (n) -> if (n < 2) 1 else n*fac(n-1) end

fib = (fib) -> (n) -> if (n < 2) n else fib(n-2)+fib(n-1) end




Out[2]:
(anonymous function)



In [3]:
fix(fac)(6)




Out[3]:
720



In [4]:
fix(fib)(5)




Out[4]:
5



More wrappers:


Now to do something interesting. I want to make a trace wrapper that will print out the function name with its arguments whenever it's called. First I make a type: the name field holds the function name as a string, and the funcall field holds the anonymous definition. Eventually I'd like to do this with mutually recursive functions.

In [5]:
type Function
    name
    funcall
end


In [6]:
function trace(f)
    (Yf) -> (input) -> begin
        output = f.funcall(Yf)(input)
        println(f.name, "(", input, ") = ", output)
        output
    end
end




Out[6]:
trace (generic function with 1 method)



In [7]:
fix( trace( Function("factorial", fac) ) )(8)




factorial(1) = 1
factorial(2) = 2
factorial(3) = 6
factorial(4) = 24
factorial(5) = 120
factorial(6) = 720
factorial(7) = 5040
factorial(8) = 40320


Out[7]:
40320



Beautiful!

In [8]:
fix( trace( Function("fibonacci", fib) ) )(6)




fibonacci(0) = 0
fibonacci(1) = 1
fibonacci(2) = 1
fibonacci(1) = 1
fibonacci(0) = 0
fibonacci(1) = 1
fibonacci(2) = 1
fibonacci(3) = 2
fibonacci(4) = 3
fibonacci(1) = 1
fibonacci(0) = 0
fibonacci(1) = 1
fibonacci(2) = 1
fibonacci(3) = 2
fibonacci(0) = 0
fibonacci(1) = 1
fibonacci(2) = 1
fibonacci(1) = 1
fibonacci(0) = 0
fibonacci(1) = 1
fibonacci(2) = 1
fibonacci(3) = 2
fibonacci(4) = 3
fibonacci(5) = 5
fibonacci(6) = 8


Out[8]:
8



Eh...less beautiful but still cool. Obviously memoization is needed to keep fibonacci from being exponential.

In [9]:
function memoize(f)
    inputs = []
    outputs = []

    (Yf) -> (input) -> begin
        # test to see if input already given
        match = find( (elem)->(elem == input), inputs )

        if length(match) == 0
            # result hasn't been found
            output = f(Yf)(input)
            inputs = [inputs, input]
            outputs = [outputs, output]
        else
            # result has already been memoized
            output = outputs[match[1]]
        end

        output
    end
end




Out[9]:
memoize (generic function with 1 method)



I like this version of memoization much better than my previous version. It's nowhere near as cluttered.

In [10]:
fix( memoize( trace( Function("fibonacci", fib) ) ) )(6)




fibonacci(0) = 0
fibonacci(1) = 1
fibonacci(2) = 1
fibonacci(3) = 2
fibonacci(4) = 3
fibonacci(5) = 5
fibonacci(6) = 8


Out[10]:
8



Yeeeeessssssssss!!!! Okay the fixed-point combinator has the ability to work with multiple inputs. How about a tail recursive fibonacci function?

Multiple inputs:


In [11]:
fib_tail = (fib) -> (n, a, b) -> if (n < 2) b else fib(n-1, b, a+b) end




Out[11]:
(anonymous function)



In [12]:
fix(fib_tail)(100, 0, 1)




Out[12]:
3736710778780434371



In [13]:
@time fix(fib_tail)(100, 0, 1)




elapsed time: 0.002422741 seconds (19344 bytes allocated)


Out[13]:
3736710778780434371



In [14]:
# better trace wrapper for functions with multiple inputs
function trace_(f)
    (Yf) -> (input...) -> begin
        println(f.name, input)
        f.funcall(Yf)(input...)
    end
end




Out[14]:
trace_ (generic function with 1 method)



In [15]:
fix( trace_( Function("fib", fib_tail) ) )(6, 0, 1)




fib(6,0,1)
fib(5,1,1)
fib(4,1,2)
fib(3,2,3)
fib(2,3,5)
fib(1,5,8)


Out[15]:
8
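For readers without Julia handy, the same pattern can be sketched in Python: an explicit fixed-point function plus a memoizing wrapper threaded through it. The names below are my own, not from the notebook.

```python
def fix(fn):
    # strict Z-combinator-style fixed point via a named recursive closure
    def z(*args):
        return fn(z)(*args)
    return z

fib = lambda f: lambda n: n if n < 2 else f(n - 2) + f(n - 1)

def memoize(f):
    cache = {}  # shared by every unwrapping of the recursion
    def wrapped(rec):
        def inner(*args):
            if args not in cache:
                cache[args] = f(rec)(*args)
            return cache[args]
        return inner
    return wrapped

print(fix(fib)(10))           # 55
print(fix(memoize(fib))(30))  # 832040, fast thanks to the cache
```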



Tuesday, June 9, 2015

Playing with streams in Julia


Delays and yields:


We need two definitions to make streams work. First, delay is a macro that wraps a function call in an anonymous function whose body is that call. Later, when we need to evaluate the function, we only have to call the anonymous function; this is what yield does. Yield isn't strictly necessary, but it makes it clear what's happening.

In [1]:
macro delay(f)
    return quote
        # put the function inside a lambda
        function ()
            $f
        end
    end
end


In [2]:
function yield(delayed_function)
    delayed_function() # call it
end




Out[2]:
yield (generic function with 1 method)



In [3]:
something = @delay(println("text"))




Out[3]:
(anonymous function)



In [4]:
yield(something)




text



Making a stream type:


Now with that done we create a stream type. The first element is the actual value held there. The second element is a delayed stream constructor. We then create head and tail to access elements of the stream. nth takes successive tails of a stream, then a head.

In [5]:
type IntStream
    literal
    next

    function IntStream(num)
        self = new()
        self.literal = num
        self.next = @delay(IntStream(num+1))
        return self
    end
end


In [6]:
function head(item::IntStream)
    return item.literal
end




Out[6]:
head (generic function with 1 method)



In [7]:
function tail(item::IntStream)
    return yield(item.next)
end




Out[7]:
tail (generic function with 1 method)



In [8]:
integers = IntStream(1)




Out[8]:
IntStream(1,(anonymous function))



In [9]:
head(integers)




Out[9]:
1



In [10]:
head(tail(tail(integers)))




Out[10]:
3



In [11]:
function nth(stream, n)
    if n == 1
        return head(stream)
    else
        return nth(tail(stream), n-1)
    end
end




Out[11]:
nth (generic function with 1 method)



In [12]:
nth(integers, 50)




Out[12]:
50



More general definitions:


For more general streams it is simple to add a next method for each object.

In [13]:
type Stream
    obj
    next
    
    function Stream(obj, next) # next is the method for generating the next object
        self = new()
        self.obj = obj
        self.next = @delay(Stream(next(obj), next))
        return self
    end
end


In [14]:
function head(item::Stream)
    return item.obj
end




Out[14]:
head (generic function with 2 methods)



In [15]:
function tail(item::Stream)
    return yield(item.next)
end




Out[15]:
tail (generic function with 2 methods)



In [16]:
function nth(stream, n)
    if n == 1
        return head(stream)
    else
        return nth(tail(stream), n-1)
    end
end




Out[16]:
nth (generic function with 1 method)



Fibonacci list stream:


Now to try it on the Fibonacci numbers. Instead of making each element the nth Fibonacci number we'll make it the list of the first n+1 Fibonacci numbers.

In [17]:
function next_fib_list(list)
    return [ list, (list[end] + list[end-1]) ]
end




Out[17]:
next_fib_list (generic function with 1 method)



In [18]:
fib_list = Stream([0, 1], next_fib_list)




Out[18]:
Stream([0,1],(anonymous function))



In [19]:
nth(fib_list, 5)




Out[19]:
6-element Array{Int64,1}:
 0
 1
 1
 2
 3
 5



In [20]:
nth(fib_list, 10)




Out[20]:
11-element Array{Int64,1}:
  0
  1
  1
  2
  3
  5
  8
 13
 21
 34
 55



In [21]:
for idx = 1:15
    println(nth(fib_list, idx))
end




[0,1]
[0,1,1]
[0,1,1,2]
[0,1,1,2,3]
[0,1,1,2,3,5]
[0,1,1,2,3,5,8]
[0,1,1,2,3,5,8,13]
[0,1,1,2,3,5,8,13,21]
[0,1,1,2,3,5,8,13,21,34]
[0,1,1,2,3,5,8,13,21,34,55]
[0,1,1,2,3,5,8,13,21,34,55,89]
[0,1,1,2,3,5,8,13,21,34,55,89,144]
[0,1,1,2,3,5,8,13,21,34,55,89,144,233]
[0,1,1,2,3,5,8,13,21,34,55,89,144,233,377]
[0,1,1,2,3,5,8,13,21,34,55,89,144,233,377,610]



It is also possible to memoize the function calls so that expensive operations aren't run more than once.
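A sketch of that memoization idea in Python (my own translation, not from the notebook): the stream caches the thunk's result the first time the tail is forced, so each node is constructed at most once.

```python
class Stream:
    """An infinite stream whose tail is a memoized thunk."""
    def __init__(self, head, thunk):
        self.head = head
        self._thunk = thunk  # delayed computation of the next node
        self._tail = None    # cache, filled on first access

    def tail(self):
        if self._tail is None:  # force the thunk at most once
            self._tail = self._thunk()
        return self._tail

def integers(n):
    return Stream(n, lambda: integers(n + 1))

def nth(stream, n):
    return stream.head if n == 1 else nth(stream.tail(), n - 1)

print(nth(integers(1), 50))  # 50
```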