Data, Science & More

Proper statistics is Bayesian

I was already convinced years ago, by videos and a series of blog posts by Jake VanderPlas (links below), that if you do inference, you do it the Bayesian way. What am I talking about? Here is a simple example: we have a data set with x and y values and are interested in their linear relationship: y = a x + b. The numbers a and b will tell us something we’re interested in, so we want to learn their values from our data set. This is called inference. This linear relationship is a model, often a simplification of reality.

The way you learn to do this in (most) school(s) is to make a Maximum Likelihood estimate, for example through Ordinary Least Squares. That is a frequentist method, which means that probabilities are interpreted as the limit of relative frequencies over very many trials. Confidence intervals then describe where your estimates would land if you repeated the experiment many times, given your model.

In practice, your data is what it is, and you’re unlikely to repeat your experiment many times. What you are after is an interpretation of probability that describes the belief you have in your model and its parameters. That is exactly what Bayesian statistics gives you: the probability of your model, given your data.

For basically all situations in science and data science, the Bayesian interpretation of probability is what suits your needs.

What, in practice, is the difference between the two? All of this is based on Bayes’ theorem, which is a pretty basic theorem that follows from conditional probability. Without all the interpretation business, it is a very basic result that no one doubts and about which there is no discussion. Here it is:

Bayes’ theorem:

P(A|B) = P(B|A) P(A) / P(B)

Bayes’ theorem reads as: the probability of A, given B (the “|B” means “given that B is true”), equals the probability of B given A, times the probability of A, divided by the probability of B. Multiplying both sides by P(B) results in the probability of A and B on both sides of the equation.

Now replace A by “my model (parameters)” and B by “the data”, and the difference between frequentist and Bayesian inference shines through. A Maximum Likelihood estimate works with P(data | model), while the Bayesian posterior (the left-hand side of the equation) gives you P(model | data), which arguably is what you should be after.

As is obvious, the term that makes the difference is P(A) / P(B), which is where the magic happens. The probability of the model is what is called the prior: stuff we knew about our model (parameters) before our data came in. In a sense, we knew something about the model parameters (even if we knew very, very little) and we update that knowledge with our data, using the likelihood P(B|A). The normalizing factor P(data) is a constant for your data set; for now we will just say that it normalizes the product of the likelihood and the prior, so that the posterior is a proper probability density function.

The posterior describes the belief in our model, after updating prior knowledge with new data.

The likelihood is often a fairly tractable analytical function, and finding its maximum can be done either analytically or numerically. The posterior, being a combination of the likelihood and the prior distribution functions, can quickly become a very complicated function whose functional form you don’t even know. Therefore, we need to rely on numerical sampling methods to fully specify the posterior probability density (i.e. to get a description of how well we can constrain our model parameters with our data). We draw a set of model parameters from the priors, calculate the likelihood for those parameters, and as such estimate the posterior for that set of model parameters. Then we pick a new point in parameter space, calculate the likelihood and the posterior again, and see whether that gets us in the right direction (higher posterior probability); we might also discard the new choice. We keep on sampling like this, in what is called a Markov Chain Monte Carlo process, in order to fully sample the posterior probability density function.
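To make this concrete: a simple Bayesian linear regression in PyMC3 can be set up along the following lines. This is a minimal sketch with made-up data and priors (and it assumes a reasonably recent PyMC3), not the exact code from the notebook mentioned below.

```python
import numpy as np
import pymc3 as pm

# Made-up example data; in practice x and y are your measurements.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1, size=x.size)

with pm.Model() as model:
    # Priors: what we believe about slope, intercept and scatter before seeing the data
    a = pm.Normal("a", mu=0, sigma=10)
    b = pm.Normal("b", mu=0, sigma=10)
    sigma = pm.HalfNormal("sigma", sigma=5)

    # Likelihood: how probable the data is, given these model parameters
    pm.Normal("y_obs", mu=a * x + b, sigma=sigma, observed=y)

    # MCMC sampling of the posterior
    trace = pm.sample(2000, tune=1000)
```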

On my github, there is a notebook showing how to do a simple linear regression the Bayesian way, using PyMC3. I also compare it to the frequentist methods with statsmodels and scikit-learn. Even in this simple example, a nice advantage of getting the full posterior probability density shines through: there is an anti-correlation between the values for slope and intercept that makes perfect sense, but it only shows up when you actually have the joint posterior probability density function:

Admittedly, there is extra work to do for a simple example like this. Things get better, though, for examples that are not easy to calculate. With the exact same machinery, we can attack virtually any inference problem! Let’s extend the example slightly, like in my second Bayesian fit notebook: an ever so slightly more tricky example, even with plain vanilla PyMC3. With the tiniest hack it works like a charm: even though the likelihood we use isn’t built into PyMC3 (which is quite well equipped already, actually), we can very easily construct it ourselves. Once you get past the overhead that seems like overkill for simple examples, generalizing to much harder problems becomes almost trivial!

I hope you’re convinced, too. I hope you found the notebooks instructive and helpful. If you are giving it a try and get stuck: reach out! Good luck and enjoy!

A great set of references in the “literature”, at least ones that I like, are:

An old love: magnetic fields on the Sun

Dutch Open Telescope on La Palma, credits: astronomie.nl and Rob Hammerschlag.

In late 2003 and early 2004 I was a third-year BSc student. I was lucky enough to be taken to La Palma by my professor Rob Rutten, presumably to use the Dutch Open Telescope, a fancy-looking half alien with a 45 cm reflective telescope on top, which makes some awesome images (and movies!) of the solar atmosphere. It turned out to be cloudy for two weeks (on the mountaintop, that is; lovely weather on the rest of the beautiful island, so not too bad…), so my “project” became more of a literature study, plus playing with some old data, than an observational one.

A tiny part of the solar surface
A small part of the surface of the Sun. The lighter areas are the top of convective cells where hot gas boils up. Cooling gas flows down in the darker lanes in between. The bright white points are places where the magnetic field of the Sun is strong and sticks through the surface.

I started looking into “bright points” in “intergranular lanes”. The granules, see left, are the tops of convection cells, pretty much the upper layer of the boiling Sun. In the darker lanes in between, the cooling gas falls back down, to be heated and boil up again later. Inside these darker lanes you sometimes see very bright spots. These are places where bundles of magnetic field stick up through the surface. They contain somewhat less gas, so you look deeper into the Sun, where it is hotter, hence the brightness of these spots.

With different filters you can choose at which height in the solar atmosphere you look (more or less; I guess you understand I take a few shortcuts in my explanation here…), so we can see how such regions look higher up and see what the gas and these magnetic bundles do. Looking at a bundle from the top isn’t necessarily very informative, but when you look near the limb of the Sun, you are basically getting a side view of these things:

Images near the limb of the Sun in three different filters, taken simultaneously. On the left you see a side view of the image above (of another part of the Sun at another time, though). Moving to the right are filters that typically see a little bit higher up in the atmosphere (the so-called chromosphere). For the interested reader: the filters are G-band, Ca II H and H-alpha. Image taken from Rutten et al. 2012.

I spent about 3 months discussing with the solar physics group in Utrecht, which still existed back in the day, trying to figure out how these magnetic flux tubes actually work. I wrote an (admittedly mediocre) Bachelor’s thesis about this work that ends with some speculations. I fondly remember how my supervisor did not like my interpretation based on the magnetic field structure, and believed it was more related to effects of the temperature structure and the corresponding effect on how the radiation travels through these bits of atmosphere.

I was suddenly drawn back to this topic two weeks ago, when I read the Dutch popular astronomy magazine Zenit. It was a special about Minnaert’s famous “Natuurkunde van ‘t vrije veld” and featured an article by the designer of the DOT, Rob Hammerschlag, which talked about observations of the Sun, including these magnetic structures. I followed some links from the article and found myself browsing the professional solar physics literature of the past decade or so, on the lookout for the current verdict on these “straws” in the middle image above.

A detailed simulation of the solar surface and its magnetic field, by Chitta and Cameron from the MPS in Germany.

Obviously, the answer is slightly complicated and not as simple as my (or my professor’s!) interpretation from 2004. While I was suggesting that twisted magnetic field lines were important, it now seems likely that the relevant phenomenon in these straws is torsional waves running through the tube (waves that you can make by grabbing the tube and twisting it, like you twist the throttle of a motorcycle). Great fun to be taken back on a journey into the solar atmosphere, just by reading a simple popular article!

I regret quitting astrophysics

In 2013 I decided to quit my career in astrophysics, move back “home” and become a data scientist. The blog post I wrote about my decision was probably my best-read publication as a professional astronomer, and it was moving to read all the reactions from people who were struggling with similar decisions. I meant every word in that blog post and I still agree with most of what I said. Now, 7 years after the fact, it is time to confess: I deeply regret quitting.

This post is meant to give my point of view. Many people who left academia are very happy that they did. Here I present some arguments why one might not want to leave, which I hope will be of help for people facing decisions like these.

I miss being motivated. In the first few years after jumping ship, many people asked me why I would ever want to stop being a professional astronomer. I have always said that my day-to-day work wasn’t too different, except that what I did with data was about financial services or some other business I was in, rather than about galaxies and the Universe, but that the “core activities” of the work were quite similar. That is kind of true. On an hour-by-hour basis I’m often just writing (Python) code to figure things out or build a useful software product. The motivation for what you do, though, is very, very different. The duty cycle of projects is short, their technical depth shallow, and the emphasis is much more on getting working products than on understanding. I am doing quite well (in my own humble opinion), but it is hard to get satisfaction out of my current job.

I miss academic research. The seeds of astronomy were planted at a very young age (8, if I remember correctly). The fascination for the wonders of the cosmos has changed somewhat in nature while growing up, but it hasn’t faded. Being at the forefront of figuring things out about the workings of the Universe is amazing, and unparalleled in any business setting. Having the freedom to pick up new techniques that may be useful for your research is something that happened to me only sporadically after the academic years. The freedom to learn and explore is valuable for creative and investigative minds, and it doesn’t fit as well in most business settings that I have seen.

I miss working at academic institutions. The vibe of being at a large research institute, surrounded by people who are intrinsically motivated to do what they do, was of great value to me. Having visitors over from around the globe with interesting, perhaps related work was a big motivator. That journal clubs, coffee discussions, lunch talks, colloquia etc. are all “part of the job” is something that even scientists themselves don’t always seem to fully appreciate. Teaching, at the depth of university-level classes, as part of the job is greatly rewarding (I do teach nowadays!).

I miss passion and being proud of what I do. The internet says I have ”the sexiest job of the 21st century”, but I think my previous job was more enjoyable to brag about at birthday parties. I can do astro as a hobby, but that simply doesn’t leave enough time to do anything substantial.

I don’t miss … Indeed, the academic career also had its downsides. There is strong competition and people typically experience quite some pressure to achieve. The culture wasn’t always very healthy, and diversity and equality are in bad shape in academia. Success criteria for your projects, and for you as a person, are typically better motivated in business. The obligatory nomadic lifestyle that you are bound to have as an early-career scientist was a very enjoyable and educational experience, but it can easily become a burden on your personal life. The drawbacks and benefits of any career path will balance out differently for everybody. If you get to such a decision point, don’t take it lightly.

The people who questioned my decision to become an extronomer were right. I was wrong. It seems too late to get back in. I think I have gained skills and experience that can be very valuable to the astronomical community, but I know that that is simply not what candidates for academic positions are selected on. On top of that, being geographically bound doesn’t help. At least I will try to stay close to the field, and who knows what might cross my path one day.

Using a graphics card for neural networks

Just a short blurb of text today, because I ran into an issue that ever so slightly annoyed me last weekend. I recently bought an old second-hand desktop machine, because I wanted to try out a couple of things that are not particularly useful on a laptop (having a server to log on to/ssh into, hosting my own Jupyter Hub, doing some I/O-heavy things with images from the internet, etc.). One more thing I wanted to try was using TensorFlow with a GPU in the machine. The idea was to check whether this is indeed as simple as just installing tensorflow-gpu (note that in versions of tensorflow newer than 1.15, there is no need to install CPU and GPU capabilities separately) and letting the software figure out by itself whether or not to use the GPU.

The TensorFlow logo, taken from the Wikipedia site linked above.

“TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks. It is used for both research and production at Google.‍” (source: Wikipedia)

I use it mostly with the lovely Keras API, which allows easy set-up of neural networks in TensorFlow. I have used it, for example (disclaimer: old code…) in a blog post titled “Beyond the recognition of handwritten digits”.

It turned out that TensorFlow will indeed do this for you, but not with just any graphics card. I have an NVIDIA GeForce GT 620. This is quite an old model indeed, but it still has 96 CUDA cores and a gigabyte of memory. Sounded good to me. After importing tensorflow in a Python session, you can ask it to detect the graphics card, so you know whether it can be used for the calculations:

This is a screenshot of a console in Jupyter Lab.

As you can see, it did not detect a graphics card…. There is a hint in the terminal session running behind my Jupyter session, though:

Apologies for the sub-optimal reading experience here….

Apparently, the CUDA compute capability of my graphics card is lower than it should be. Only cards with a CUDA compute capability of 3.5 or higher will work (I know the terminal says 3.0, but the documentation says 3.5). On this NVIDIA page, there is a list of their cards and the corresponding CUDA compute capabilities. Please check that out before trying to run TensorFlow on your card (and before getting frustrated). And when you consider buying a (second-hand?) graphics card to play with, definitely make sure you buy a useful one!
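For completeness: the check itself is a one-liner. Something along these lines should tell you whether TensorFlow sees a usable GPU (assuming a reasonably recent TensorFlow 2.x; the rough 1.x equivalent is commented out):

```python
import tensorflow as tf

# TensorFlow 2.x: list the GPUs that TensorFlow can actually use.
# An empty list means it silently falls back to the CPU.
print(tf.config.list_physical_devices("GPU"))

# The rough equivalent in the TensorFlow 1.x era:
# print(tf.test.is_gpu_available())
```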

Test for COVID-19 in groups and be done much quicker!

In these times of a pandemic the world is changing. On large scales, but also for people in their everyday lives. The same holds for me, so I figured I could post something on the blog again, to break with the habit of not doing so. Disclaimer: besides the exercises below being almost trivially over-simplified, I’m a data scientist and not an epidemiologist. Please believe specialists and not me!

Inspired by this blog post (in Dutch) I decided to look at simple versions of testing strategies for infection tests (a popular conversation topic nowadays), in a rather quick-and-dirty way. The idea is that if being infected (i.e. testing positive) is rare, you could start out by testing a large group as a whole. As it were, the cotton swabs of many people are put into one tube with testing fluid. If there’s no infection in that whole group you’re done with one test for all of them! If there, on the other hand, is an infection, you can cut the group in two and do the same for both halves. You can continue this process until you have isolated the few individuals that are infected.

It is clear, though, that many people get tested more than once and especially the infected people are getting tested quite a number of times. Therefore, this is only going to help if a relatively low number of people is infected. Here, I look at the numbers with very simple “simulations” (of the Monte Carlo type). Note that these are not very realistic, they are just meant to be an over-simplified example of how group testing strategy can work.

A graphical example of why this can work is given below (image courtesy of Bureau WO):

Above the line you see the current strategy displayed: everybody gets one test. Below the line, the group is tested as a whole and, after an infection is found, the group is cut in halves. Those halves are tested again, and the halves with an infection are gradually cut into pieces again. This leads, in the end, to the identification of the infected people. In the meantime, parts of the group without an infection are not split up further and everyone in those sections is declared healthy.

Normally, by testing people one by one, you would need as many tests as there are people to identify all infected people. To quantify the gain from group testing, I divide this total number of people by the number of tests the simulation needs. Given a maximum number of available tests (like we have in the Netherlands), the number of people in a very large population that can be tested is then higher by exactly this gain factor.

In this notebook (which you don’t need in order to follow the story, but which you can check out to play with the code) I create batches of people. Randomly, some fraction gets assigned “infected”, the rest “healthy”. Then I start the test, which I assume to be perfect (i.e. every infected person gets detected and there are no false positives). For different infection rates (the true overall percentage that is infected) and for different original batch sizes (the size of the group that initially gets tested), I study how many tests are needed to isolate every single infected person.
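The core of such a simulation is tiny. A minimal sketch of the halving strategy, assuming a perfect test (this is not the notebook’s exact code), could look like this:

```python
import numpy as np

def tests_needed(batch):
    """Number of tests to clear or identify everyone in a batch, by recursive halving."""
    if len(batch) == 0:
        return 0
    if not batch.any():        # one pooled test, negative: the whole group is cleared
        return 1
    if len(batch) == 1:        # a single person who tests positive: identified
        return 1
    half = len(batch) // 2     # pooled test was positive: split the group and repeat
    return 1 + tests_needed(batch[:half]) + tests_needed(batch[half:])

rng = np.random.default_rng(1)
batch = rng.random(256) < 0.01    # a batch of 256 people, roughly 1% truly infected
print("tests used:", tests_needed(batch), "for", len(batch), "people")
```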

In a simple example, where I use batches of 256 people (note that this is conveniently a power of 2, but that is not necessary for this to work), I assume an overall infected fraction of 1%. This is lower than the current fraction of positive tests in the Netherlands, but that is likely due to only testing very high-risk groups. This results in a factor 8 gain, which means that with the number of tests we have available per day, we could test 8 times more people than we do now, if 1% is a reasonable guess of the overall infection rate.

To get a sense of these numbers for other infection rates and other batch sizes I did many runs, the results of which are summarized below:

As can be seen, going through the hassle of group testing is not worth it if the true infected fraction is well above a percent. If it is below, the gain can be high, and ideal batch sizes are around 50 to 100 people or so. If we are lucky, and significantly less than a percent of people is infected, gains can be more than an order of magnitude, which would be awesome.

Obviously, group testing comes at a price as well. First of all, people need to be tested more than once in many cases (which requires test results to come in relatively quickly). Also, there’s administrative overhead, as we need to keep track of which batch you were in to see if further testing is necessary. Last, but certainly not least, it needs to be possible to test many people at once without them infecting each other. In the current standard setup, this is tricky, but given that testing is basically getting some cotton swab in a fluid, I’m confident that we could make that work if we want!

If we are unlucky, and far more than a percent of people are infected, different strategies are needed to combine several people in a test. As always, Wikipedia is a great source of information on these.

And the real caveat… realistic tests aren’t perfect… I’m a data scientist, and not an epidemiologist. Please believe specialists and not me!

Stay safe and stay healthy!

Beyond the recognition of handwritten digits

There are many tutorials on neural networks and deep learning that use the handwritten digits data set (MNIST) to show what deep learning can give you. They generally train a model to recognize the digits and show that it does better than logistic regression. Many such tutorials also say something about auto-encoders and how they should be able to pre-process images, for example to get rid of noise and improve recognition on noisy (and hence more realistic?) images. Rarely, though, is that worked out in any amount of detail.

This blog post is a short summary; a full version with code and output is available on my github. Lazy me has taken some shortcuts: I have pretty much always used the default values of all models I trained, I have not investigated how they can be improved by tweaking (hyper-)parameters, and I have only used simple dense networks (while convolutional neural nets might be a very good choice for improvement in this application). In some sense, the models can only get better in a realistic setting. I have compared with other simple but popular models, and sometimes that comparison isn’t very fair: the training time of the (deep) neural network models is often much longer. It is nevertheless not very easy to define a “fair comparison”. That said, going just that little bit beyond the recognition of the handwritten digits can be done as follows.

The usual first steps

To start off, the data set of MNIST has a whole bunch of handwritten digits, a random sample of which looks like this:

The images are 28×28 pixels and every pixel has a value ranging from 0 to 255. All these 784 pixel values can be thought of as features of every image, and the corresponding labels of the images are the digits 0-9. As such, machine learning models can be trained to categorize the images into 10 categories, corresponding to the labels, based on 784 input variables. The labels are given in the data set.

Logistic regression can do this fairly well, and gets roughly 92% of the labels right on images it has not seen while training (an independent test set), after being trained on about 50k such images. A neural network with one hidden layer (of 100 neurons, the default of the multi-layer perceptron model in scikit-learn) gets about 97.5% right, a significant improvement over simple logistic regression. The same model with 3 hidden layers of 500, 200 and 50 neurons, respectively, further improves that to 98%. Similar models implemented in TensorFlow/Keras, with proper activation functions, get about the same score. So far, so good, and this stuff is in basically every tutorial you have seen.
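For reference, the scikit-learn part of that comparison boils down to something like the snippet below. It is a sketch (default-ish settings, no tuning), and the scores you get will be roughly, not exactly, the numbers quoted above.

```python
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# MNIST: 70k images of 28x28 = 784 pixels; scale pixel values to [0, 1]
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=50_000, random_state=0)

# Logistic regression on the raw pixels (takes a few minutes; scores roughly 92%)
logreg = LogisticRegression(max_iter=200).fit(X_train, y_train)
print("logistic regression:", logreg.score(X_test, y_test))

# A dense network with three hidden layers (scores roughly 98%)
mlp = MLPClassifier(hidden_layer_sizes=(500, 200, 50), random_state=0).fit(X_train, y_train)
print("3-layer MLP:", mlp.score(X_test, y_test))
```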

Let’s get beyond that!

Auto-encoders and bottleneck networks

Auto-encoders are neural networks that typically use many input features, then go through a narrower hidden layer and then as output reproduce the input features. Graphically, it would look like this, with the output being identical to the input:

This means that, if the network performs well (i.e. if the input is reasonably well reproduced), all information about the images is stored in a smaller number of features (equal to the number of neurons in the narrowest hidden layer), which can be used as a compression technique, for example. It turns out that this also works very well for recovering images from noisy variants of them (the idea being that the network figures out the important bits, i.e. the actual image, and discards the unimportant ones, i.e. the noise pixels).

I created a set of noisy MNIST images looking, for 10 random examples, like this:

A simple auto-encoder, with hidden layers of 784, 256, 128, 256 and 784 neurons, respectively (note the symmetry around the bottleneck layer!), does a fair job of reproducing noise-free images:

It’s not perfect, but it is clear that the “3” is much cleaner than it was before “de-noising”. A little comparison of recognizing the digits on noisy versus de-noised images shows that it pays to do such a pre-processing step first: the model trained on the clean images recovers only 89% of the correct labels on the noisy images, but 94% after de-noising (a factor 2 reduction in the number of wrongly identified labels). Note that all of this is done without any optimization of the model. The Kaggle competition page on this data set shows many optimized models and their amazing performance!
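In Keras, a de-noising set-up with those layer sizes can be written roughly as follows. This is a sketch, not the notebook’s exact code; X_noisy and X_clean stand for arrays of shape (n, 784) with pixel values scaled to [0, 1].

```python
from tensorflow import keras
from tensorflow.keras import layers

# Symmetric auto-encoder around a 128-neuron bottleneck; trained to map noisy
# images onto their clean versions.
autoencoder = keras.Sequential([
    layers.Dense(784, activation="relu", input_shape=(784,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),     # the bottleneck
    layers.Dense(256, activation="relu"),
    layers.Dense(784, activation="sigmoid"),  # reconstructed pixels in [0, 1]
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# autoencoder.fit(X_noisy, X_clean, epochs=20, batch_size=128, validation_split=0.1)
# X_denoised = autoencoder.predict(X_noisy)
```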

Dimension reduction and visualization

To investigate whether any structure exists in the data set of 784 pixels per image, people often, for good reasons, resort to manifold learning algorithms like t-SNE. Such algorithms go from 784 dimensions down to, for example, 2, keeping local structure intact as much as possible. A full explanation of such algorithms goes beyond the scope of this post; I will just show the result here. The 784 pixels are reduced to two dimensions, and in this figure I plot those two dimensions against each other, color coded by the image label (so every dot is one image in the data set):

The labels seem very well separated, suggesting that dimension reduction can go as far down as two dimensions while still keeping enough information to recover the labels to reasonable precision. This inspired me to try the same with a simple deep neural network. I go through a bottleneck layer of 2 neurons, surrounded by two layers of 100 neurons. Between the input and those there is another layer of 500 neurons, and the output layer obviously has 10 neurons. Note that this is not an auto-encoder: the output is the 10 labels, not the 784 input pixels. That network recovers more than 96% of the labels correctly! The output of the bottleneck layer can be visualized in much the same way as the t-SNE output:

It looks very different from the t-SNE result, but upon careful examination there are similarities, for example in terms of which digits lie more closely together than others. Again, this distinction is good enough to recover 96% of the labels! All that needs to be stored about an image is 2 numbers, obtained from the encoding part of the bottleneck network, and using the decoding part of the network the labels can be recovered very well. Amazing, isn’t it?
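For the curious: the bottleneck classifier sketched above (again, a sketch, not the exact notebook code) looks roughly like this in Keras, including how to pull the 2-dimensional codes out for plotting. The X_train/y_train names are placeholders for the MNIST arrays.

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))
x = layers.Dense(500, activation="relu")(inputs)
x = layers.Dense(100, activation="relu")(x)
bottleneck = layers.Dense(2, name="bottleneck")(x)    # the 2-neuron bottleneck
x = layers.Dense(100, activation="relu")(bottleneck)
outputs = layers.Dense(10, activation="softmax")(x)   # the 10 digit labels

clf = keras.Model(inputs, outputs)
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# clf.fit(X_train, y_train, epochs=20, validation_split=0.1)

# The 2 numbers per image that get plotted come straight from the bottleneck layer:
encoder = keras.Model(inputs, clf.get_layer("bottleneck").output)
# codes = encoder.predict(X_test)
```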

Outlook

I hope I have shown you that there are a few simple steps beyond most tutorials that suddenly make these whole deep neural network exercises seem just a little bit more useful and practical. Are there applications of such networks that you would like to see worked out in some detail as well? Let me know, and I just might expand the notebook on github!

Hacking for a future data flood

Astronomy has always been a “big data science”. Astronomy is an observational science: we just have to wait, watch, see and interpret what happens somewhere on the sky. We can’t control it, we can’t plan it; we can just observe in any kind of radiation imaginable and hope that we understand enough of the physics that governs the celestial objects to make sense of it. In recent years, more and more tools that are so very common in the world of data science have also penetrated the field of astrophysics. Where observational astronomy has largely been a hypothesis-driven field, data-driven “serendipitous” discoveries have become more commonplace in the last decade, and in fact entire surveys and instruments are now designed to be mostly effective through statistics, rather than through technology (even though that is still state of the art!).

To illustrate how astronomy is leading the revolutions in data streams, this infographic was used by the organizers of a hackathon I went to near the end of April:
Streams and volumes of data!

The Square Kilometre Array will be a gigantic radio telescope that is going to produce a humongous 160 TB/s of data coming out of its antennas. This needs to be managed and analysed on the fly somehow. At ASTRON a hackathon was organized to bring together a few dozen people from academia and industry to work on projects that can prepare astronomers for the immense data rates they will face in just a few years.

As usual, and for the better, smaller working groups split up and started working on different projects. Very different projects, in fact. Here, I will focus on the one I have worked on, but by searching for the right hash tag on twitter, I’m sure you can find info on many more of them!

ZFOURGE

We jumped on two large public data sets on galaxies and AGN (Active Galactic Nuclei: galaxies with a supermassive black hole in the center that is actively growing). One of them was a very large data set with millions of galaxies, but not very many properties per galaxy (from SDSS). The other, from which the coolest result (in my own, not very humble opinion) was distilled, was from the ZFOURGE survey. That data set contains “only” just under 400k galaxies, but with very many known properties, such as brightnesses through 39 filters, derived properties such as the total mass in stars and the rate at which stars are formed, as well as an indicator of whether or not the galaxies have an active nucleus, as determined from their properties in X-rays, radio or infrared.

I decided to try something simple: take the full photometric set of columns (the brightness of the objects at many, many wavelengths) as well as a measure of their distance to us, and do some unsupervised machine learning on that data set. The data set had 45 dimensions, so an obvious first choice was to do some dimensionality reduction on it. I played with PCA and my favorite bit of magic: t-SNE. A dimensionality reduction algorithm like that is supposed to reveal whether any substructure is present in the data. In short, it tends to conserve local structure and screw up global structure just enough to give a rather clear representation, in two dimensions (or more, if you want, but two is easiest to visualize), of any clumping in the original high-dimensional data set. I made this plot without putting in any knowledge about which galaxies are AGN, but colored the AGN and made them a bit bigger, just to see where they would end up:
t-SNE representation of galaxy data from ZFOURGE
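In code, that first try boils down to something like this. It is a sketch with placeholder data standing in for the ZFOURGE table (the real columns are the 45 photometric/distance features and the AGN flags), not the exact hackathon code; in practice you would also run t-SNE on a subsample, because it is slow on hundreds of thousands of objects.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

# Placeholder stand-in for the real catalogue: 45 feature columns per galaxy
# plus a boolean AGN flag.
rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 45))
is_agn = rng.random(5000) < 0.02

# Scale the features, then reduce 45 dimensions to 2 with t-SNE
X = StandardScaler().fit_transform(features)
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# Plot all galaxies, then overplot the known AGN a bit bigger
plt.scatter(embedding[:, 0], embedding[:, 1], s=2, c="lightgrey")
plt.scatter(embedding[is_agn, 0], embedding[is_agn, 1], s=10, c="crimson", label="AGN")
plt.legend()
plt.show()
```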

To me, it was absolutely astonishing to see how that simple first try came up with something that seems too good to be true. The AGN cluster within clumps that were identified without any knowledge of whether the galaxies have an active nucleus or not. Many galaxies in there are not classified as AGN. Is that because they were simply not observed at the right wavelengths? Or are they observed but would their flux be just below detectable levels? Are the few AGN far away from the rest possible mis-classifications? Enough questions to follow up!

On the fly, we needed to solve some pretty nasty problems to get to this point, and that’s exactly what makes these projects so much fun to do. The data set contained a lot of null values: no observed flux in some filters. This could mean either that the observatory that was supposed to measure that flux didn’t point in the direction of the objects (yet), or that there was no detected flux above the noise. Cells that have no number at all, or only an upper limit on the brightness, in some of the features fed to the algorithm are something most ML models are not very good at handling. We made some simple approximations and informed guesses about what numbers to impute into the data set. Did that have any influence on the results? Likely! Hard to test, though… For me, this has sparked a new investigation into how to deal with ML on data with upper or lower limits on some of the features. I might report on that some time in the future!

The hackathon was a huge success. It is a lot of fun to gather with people with a lot of different backgrounds to just sit together for two days and in fact get to useful results, and interesting questions for follow-up. Many of the projects had either some semi-finished product, or leads into interesting further investigation that wouldn’t fit in two days. All the data is available online and all code is uploaded to github. Open science for the win!

SAS in a Pythonista’s workflow

Most who know me will probably know that I’m a major fan of Python in particular and open source software in general. Still, in my day job I use SAS quite a lot. Because I will be at the SAS Global Forum in Denver next week, I thought it would be a good time to write about the place of SAS in my work.


First of all, I obviously cannot pay the massive license fees of software like SAS, which is also my biggest objection to using it. So when I do anything on a freelance basis, it will be purely Python, in almost all assignments. In my day job I arrived, now 4.5 years ago, in a team that was even called the “SAS team”. Their main, and in fact for a long time only, tooling was SAS Base. After fighting for a good two years, we now have a decent Anaconda installation with a Jupyter Hub, which makes for much faster data exploration, easier analytics, and easier and much prettier visualization (no, SAS Visual Analytics is *not* a visualization tool, it’s a dashboarding tool, enormous difference).

Alright, so I can use Python and R, and I still use SAS? Indeed I do. I think every tool is good for something. Part of our team is mainly occupied with ETL (extract, transform, load) work: building a good, stable, clean and well-structured data warehouse that others (like me) can use with ease. For such work, SAS has some great tools. Moving data from a database that is designed to make a particular application work smoothly into a data warehouse that also keeps track of the history of that database is not a trivial task, and keeping track of all relevant changes in the source database is quite some work. This is done with SAS (by others) and results in a data warehouse stored in SAS data sets, which I subsequently use for data analysis, machine learning, you name it.

From a data warehouse in SAS tables, I find it easiest to do my data collection, merging, aggregation and all that prerequisite work one always needs for analytics in SAS as well. Most of my projects start with data wrangling in SAS, and when I get to a small number of tables that are ready for analysis, analytics or visualization, I move over to Python (where the fun starts).

Yes, I do have my strong opinions about many SAS tools, about SAS as a company, about how they work and about how the world (e.g. auditors, authorities) looks at SAS versus software from the open source realms. Still, I use a tool that is good at what I need from it, if I have access to it. More and better interfacing between Python, R and SAS is well underway, and I hope the integration between them will only become smoother. Hoping that SAS would vanish from my world is just an utterly unrealistic view of the world.

Why upgrade your OS if you don’t need to?

Two weeks ago I had nothing to do on a Saturday afternoon. Geeky as I am, I thought “let’s upgrade Xubuntu” on my laptop. I was still on the 16.04 Long Term Support version and it is 2018 by now, after all. A month or so later the new LTS version, 18.04, would appear, but why wait? That afternoon I had nothing better to do; when the new version comes out, I probably will be occupied. As usual, I googled around about a version upgrade from 16.04 to 17.10 and was, by every single site I encountered on the topic, strongly advised not to try this. But alas, I’m as stubborn as data scientists get and went ahead anyway.

Ouch. There seemed to be no problem at all. The upgrade was fine, except for some small weird looking error from LibreOffice, which I hardly ever use anyway. Everything worked as it worked before, even with the same looks (then why would you upgrade, right?). I played a game of Go for a bit and then went and let all other software do their updates as well. The usual reboot went blazing fast. I was prompted for my password and then the shit hit the fan: black screen. Nothing. Light from under the keyboard, but that was the only sign of life.

That is when you all of a sudden need a second computer. In all my confidence I had not created a bootable USB stick before the upgrade, and I didn’t have an older one with me either. With a terribly slow and annoying Windows laptop I managed to create one and booted the laptop from that. All fine. Good, so let’s replace the new and corrupted OS with something from this stick. Apparently, when I previously had installed my laptop, there seemed to be no need for a mount point for /efi, and now there was. Annoyingly, this needed to be at the start of the partition table, screwing up pre-existing partitions on that disk. At that point I decided: alright, format the disk, decently partition the disk and a fresh install would make my dear laptop all young and fresh again. There are two SSDs in the machine, and there was a back up of my home on the second disk.

The new install works like a charm, and restoring a back up also calls for some more cleaning up. It all looks tidy again.

I know, not everybody will be a fan of the old-school conky stuff on the desktop, but it just tickles my inner nerd a bit (and I actually do use it to monitor performance when something heavy is running; I sure like it better than a top screen!). It took a decent part of a day altogether, so next time I might take the advice of fellow geeks a bit more seriously. On the other hand, after mounting the second disk and finding the backups intact, that short moment of relief was worth it.

Off we go with a clean machine!

Craft beer!


One of my major interests outside the geeky realms is craft beer. I’m an avid sampler, try to go to beer festivals whenever I can, open Untappd almost every day, and I sometimes tweet about beer under a different twitter handle than my own: @BeerTweep.


In Dutch I have written (and hope to write again soon!) content for Bierista. You may be too lazy to go look on that site, so here’s a list of the specific things I wrote there (newest first):