sklearn Source Code tree – Part I

This post, actually this upcoming series of posts, will focus on gaining a deeper knowledge of the exact implementation of sklearn: not only how the algorithms are implemented in depth, but also the best practices and style of one of the most popular Python libraries, or even machine learning libraries, out there. Today's focus is _tree.pyx.

To really dive into the details of trees, one has to be familiar with the underlying data structures used for implementing a decision tree, or a tree in general. Under _utils.pxd, you can easily find the declarations for two key data structures, Stack and PriorityHeap, and their atomic record units, StackRecord and PriorityHeapRecord.

[Screenshot: Stack and StackRecord declarations in _utils.pxd]

No surprise, Stack is the commonly used data structure that supports FILO (first in, last out) logic; hence the push and pop methods. Each record is actually fairly interesting, and we will further explain what each of its attributes is used for.
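To make this concrete, here is a rough pure-Python re-sketch of the Stack/StackRecord pair. The field names follow my reading of _utils.pxd but are paraphrased from memory, so treat it as illustrative rather than a faithful copy of the Cython code.

```python
from dataclasses import dataclass

@dataclass
class StackRecord:
    start: int                # first sample index covered by this node
    end: int                  # one past the last sample index
    depth: int                # depth of the node in the tree
    parent: int               # node id of the parent
    is_left: bool             # whether this node is a left child
    impurity: float           # impurity of the node
    n_constant_features: int  # features known to be constant in this range

class Stack:
    """FILO container: the last record pushed is the first popped."""

    def __init__(self):
        self._records = []

    def push(self, record: StackRecord) -> None:
        self._records.append(record)

    def pop(self) -> StackRecord:
        return self._records.pop()

    def is_empty(self) -> bool:
        return not self._records
```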

After that comes another data structure, called PriorityHeap.

[Screenshot: PriorityHeap and PriorityHeapRecord declarations in _utils.pxd]

I came across a great post about building a PriorityQueue with a BinaryHeap; you can find the full write-up and a Python implementation here. At a very high level, the regular Stack is used by the depth-first tree builder and the PriorityHeap by the best-first tree builder. In an ideal world, both builders lead to the same final tree, but the best-first one improves the tree fastest per added node and is usually preferred when we need to cut the learning process off early with pruning (like decision stumps in GBM). To simplify, we will start by focusing on the depth-first tree builder.
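To illustrate the best-first idea, here is a minimal sketch using Python's built-in heapq (the real PriorityHeap is a hand-rolled Cython binary heap; the node names and improvement numbers below are made up):

```python
import heapq

# heapq is a min-heap, so we push the negated impurity improvement in order
# to pop the node whose split helps the most first.
frontier = []
heapq.heappush(frontier, (-0.30, "node A"))  # improvement 0.30
heapq.heappush(frontier, (-0.05, "node B"))  # improvement 0.05
heapq.heappush(frontier, (-0.42, "node C"))  # improvement 0.42

while frontier:
    neg_improvement, node = heapq.heappop(frontier)
    print(node, -neg_improvement)  # C, then A, then B: best split first
```

This is why best-first building pairs so naturally with early stopping: whenever you cut the process off, the nodes expanded so far are the ones that improved the tree the most.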

Now let's switch our eyes to _tree.pxd.

Like StackRecord, the atomic unit of a tree is the Node, and each node stores its left and right children (identified by their IDs), the split feature, the split threshold (for continuous features), the impurity at the node, and a few other attributes.
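Re-sketched as a Python dataclass, the struct looks roughly like this (field names are from my reading of _tree.pxd, simplified, and may not match the current sklearn version exactly):

```python
from dataclasses import dataclass

@dataclass
class Node:
    left_child: int      # id of the left child (a sentinel like -1 for a leaf)
    right_child: int     # id of the right child
    feature: int         # feature used for the split at this node
    threshold: float     # split threshold on that feature
    impurity: float      # impurity at this node
    n_node_samples: int  # number of training samples reaching this node
    weighted_n_node_samples: float  # same, but sample-weighted
```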

Then let's take a look at the Tree class's attributes. Node* and double* (the nodes and value arrays) are the two pointers/arrays that store the actual content of a decision tree.

[Screenshot: Tree class attributes in _tree.pxd]

Now that we have skimmed through the basic data structures, let's switch to the _tree.pyx implementation and take a look.

[Screenshot: overview of _tree.pyx]

The whole _tree.pyx isn't that complex, only ~1600 LOC, and if we are only interested in the Tree class implementation and the simplest builder, DepthFirstTreeBuilder, you only need to read a few hundred lines of code. So let's get started.

At the beginning, a TreeBuilder class is declared as the base interface, which is further extended into the different types of builders (depth-first or best-first). It only has one internal method, _check_input, to ensure the data is contiguous.

[Screenshot: TreeBuilder base class and its _check_input method]

Across the whole implementation, there are numerous places that, for performance reasons, make calls to compress sparse matrices and the like. Those functions play a pivotal role in making a Python library fast enough, but they deserve a dedicated series of their own and are less relevant to the tree implementation, so we will skip them for now.

[Screenshot: DepthFirstTreeBuilder constructor]

The constructor of DepthFirstTreeBuilder takes several key parameters used when building a tree. Splitter is one of the various splitter implementations, which we will cover later. Now let's go through the build method and see how each attribute drives the building process.

max_depth determines the maximum depth of the decision tree. As a decision tree is a binary tree, when it is complete the number of nodes grows exponentially. For example, with 1 level there is 1 node in total (the root); with 2 levels, 3 nodes; with 3 levels, 7 nodes; and with N levels you need 1 + 2 + 4 + … = 2^0 + 2^1 + … + 2^(N-1) = 2^N - 1 nodes in total.

And as you can tell from the first few steps of the build method, that is exactly how max_depth is used.

[Screenshot: initial tree capacity computation in build]
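In rough pseudo-Python, paraphrasing those first lines from memory (the real code is Cython, so details may differ slightly):

```python
INITIAL_STACK_SIZE = 10  # constant used for the record stack in _tree.pyx

def initial_capacity(max_depth: int) -> int:
    """Guess an initial size for the node array from max_depth."""
    if max_depth <= 10:
        return 2 ** (max_depth + 1) - 1  # a complete tree of that depth
    return 2047                          # 2^11 - 1; the array grows on demand
```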

The next step is actually building the tree, by iteratively popping a record, working on it, and pushing its two children.

[Screenshot: the main loop of DepthFirstTreeBuilder.build]

As you can tell from the code, the stack pops each record and replaces it with its two children, if any. That is also the reason why the stack starts with a size of INITIAL_STACK_SIZE, which is 10, the same as the depth behind the initial tree capacity. In this way, it first builds/traverses the left-most branch, then, bottom up, slowly transitions to the right and traverses the whole tree with a stack of only around 10 records.
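Here is a stripped-down, runnable paraphrase of that loop. Everything below is a toy stand-in: records carry only a depth, while the real ones carry the sample range, parent id, impurity, and more, and the real splitting work happens in splitter.node_split and tree._add_node.

```python
def build_depth_first(max_depth):
    nodes = []                    # flat node storage, like Tree's node array
    stack = [(0, -1, False)]      # (depth, parent_id, is_left) for the root
    while stack:
        depth, parent, is_left = stack.pop()
        node_id = len(nodes)
        is_leaf = depth >= max_depth  # real code also checks sample counts,
        nodes.append((parent, is_left, is_leaf))  # impurity, etc.
        if not is_leaf:
            stack.append((depth + 1, node_id, False))  # right pushed first...
            stack.append((depth + 1, node_id, True))   # ...so left pops first
    return nodes

print(len(build_depth_first(3)))  # 15 == 2^(3+1) - 1, a complete tree
```

Because the left child is always popped next, the stack holds roughly one pending record per level, which is exactly why a stack of about max_depth records is enough.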

Now, let's take a look at how the splitter is called in the depth-first building process.

[Screenshot: the splitter.node_split call inside the build loop]

In the next post, we will spend more time looking into the node_split method and the tree._add_node method to further understand the tree-building details.


Python functools lru_cache

lru_cache stands for "least recently used" cache. The value of any sort of cache is to save time by avoiding repetitive computation: you store a computed value in a temporary place (the cache) and look it up later rather than recomputing everything. functools is a built-in Python library, and it includes a decorator, lru_cache, designed to help Python developers achieve exactly that.

So I have a dummy problem here. Instead of the Fibonacci problem, it is even more exhaustive: each new item in the array needs to be the sum of all the previous items plus 1, so the naive complexity grows exponentially.
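Here is a hedged reconstruction of that toy problem (my own code, not the exact snippet from the screenshot), in both its naive and cached forms:

```python
from functools import lru_cache

# f(n) = f(0) + f(1) + ... + f(n-1) + 1, with f(0) = 1.

def slow(n):
    if n == 0:
        return 1
    return sum(slow(i) for i in range(n)) + 1  # re-solves every subproblem

@lru_cache(maxsize=None)
def fast(n):
    if n == 0:
        return 1
    return sum(fast(i) for i in range(n)) + 1  # each f(i) is computed once

# slow(24) fans out into roughly 2^24 calls and takes seconds;
# fast(24) returns immediately, and fast.cache_info() shows why.
```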

[Screenshot: timing the naive recursive version]

Clearly, computing n = 24 already takes more than 6 seconds. However, after decorating the function with lru_cache, it is as quick as 4 milliseconds. You can also pull up the cache info; the sheer number of hits is the secret of why the function sped up so much.

[Screenshot: the lru_cache-decorated version and its cache_info output]

The performance acceleration is outstanding, and given the definition of dynamic programming, this is almost a necessity: developers can focus all their effort on decomposing the problem into subproblems rather than worrying about manually storing a hashtable somewhere for lookup.

[Screenshot: a LeetCode solution using lru_cache]

Attached is an example of how, by using the lru_cache decorator, you can come up with a solution that outperforms 100% of the Python solutions on LeetCode, both execution-time and space wise.

If you are interested in looking under the hood, it isn't that complex, as all the utilities are written in Python rather than Cython. However, developers are only supposed to interact with lru_cache through cache_clear or cache_info, because it is believed that messing with the cache in a threaded environment will cause unnecessary trouble. I tried to access some of its private, internal attributes but failed to reach the cache, due to the fact that the cache lives within the namespace of the wrapper and is not accessible outside the function. Getting at it might be an interesting challenge.
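For reference, these are the supported handles on the wrapper (this much is documented functools API):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def square(x):
    return x * x

square(3)                     # miss: computed and stored
square(3)                     # hit: served from the cache
print(square.cache_info())    # CacheInfo(hits=1, misses=1, maxsize=None, currsize=1)
square.cache_clear()          # empties the cache
print(square.__wrapped__(4))  # 16 -- calls the original function, bypassing the cache
```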

James Powell 2017 Pydata talk – Python Expert

Mr. James Powell gave this great talk at PyData Seattle 2017 about some of the advanced features and concepts in Python (using Python 3, but most features also apply to Python 2).

Here is a list of some of the highlights Mr. Powell covered, which I want to list here for later reference (a toy sketch follows the list):

  • Data model – “dunder” methods (double underscore), the core of the Python data model
  • Library/user – assert, metaclass, subclass
  • Decorators – @, a handy way of applying a wrapper function
  • Generators – sequential, interruptible, and memory-efficient; yield, __iter__, __next__
  • contextmanager – __enter__, __exit__

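As a quick refresher, here is a minimal sketch of my own (not code from the talk) that touches several of these at once: a dunder method, a generator, a decorator, and a context manager.

```python
from contextlib import contextmanager

class Countdown:
    def __init__(self, n):
        self.n = n

    def __iter__(self):          # data model: a dunder makes this iterable
        n = self.n
        while n > 0:
            yield n              # generator: lazy and memory efficient
            n -= 1

def shout(func):                 # decorator: wraps a function with new behavior
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return f"hello, {name}"

@contextmanager
def banner(title):               # context manager without hand-writing
    print(f"== {title} ==")      # __enter__ and __exit__
    yield
    print("== done ==")

with banner("demo"):
    print(list(Countdown(3)))    # [3, 2, 1]
    print(greet("pydata"))       # HELLO, PYDATA
```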
In the end, I came across this glossary page on Python's documentation website, which doesn't hurt to use as a checklist or a challenge.

CDN and GitHub – jsDelivr

Content Delivery Network (CDN)

In HTML, many tags, especially the ones related to JavaScript, require referencing a certain script, and some require linking to a stylesheet by including a CSS file in the link tag. You can include all the necessary dependencies as static files in the same environment where the site is hosted, referencing them by relative path, or you can use a complete URL that can be hosted anywhere on the internet (usually on a CDN, a Content Delivery Network).
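Concretely, the two styles look like this (the paths and the CDN host below are placeholders, not a real project):

```html
<!-- dependencies hosted alongside the site, referenced by relative path -->
<script src="js/app.js"></script>
<link rel="stylesheet" href="css/style.css">

<!-- the same dependencies served from a CDN by full URL -->
<script src="https://cdn.example.com/libs/app.js"></script>
<link rel="stylesheet" href="https://cdn.example.com/libs/style.css">
```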

There are several benefits to it:

  1. It effectively offloads the serving of those files to CDN servers (load balancing, performance optimization, etc.).
  2. The libraries and content are more abundant and complete at a central place like a CDN, so developers don't have to shop around on the internet, download each dependency, and organize commonly used ones on their own site.

There are also cases in which you don't even have full control over the site you are working on. For example, you could be developing a subsection of an important website where you only have permission to edit certain sections, and uploading dependencies is not an option. Also, if you are writing a Chrome extension, you could be injecting a script into the target sites to manipulate the page, but it is not realistic to upload your dependencies to something like github.com/mydependency.js.

Of course, a CDN goes way beyond serving little scripts and can expand to any kind of content serving.

jsDelivr

There are several sites, like cdnjs.com, that host plenty of JavaScript modules and libraries. I came across this site called jsDelivr, which looks like cdnjs.com but has a few cool features, such as letting you refer to any GitHub repo.

[Screenshot: the jsDelivr home page]

Of course, you can refer to any file on GitHub directly by using the link to the raw file hosted there. However, GitHub is just not meant to serve as a CDN, and that solution is sometimes not as straightforward, depending on the file types.

[Screenshot: referencing a raw file hosted on GitHub]

By using jsDelivr, you can simply prefix the GitHub path with the jsDelivr URL and you are good to go. I have managed to replace all my references to certain GitHub materials with jsDelivr, and it works great.
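For reference, here is a small sketch of the URL pattern. The gh/ scheme itself is jsDelivr's documented convention; the jQuery file below is just an example I picked, so double-check the exact path before relying on it.

```python
# GitHub raw file: https://raw.githubusercontent.com/<user>/<repo>/<ref>/<path>
# jsDelivr mirror: https://cdn.jsdelivr.net/gh/<user>/<repo>@<ref>/<path>

def jsdelivr_url(user: str, repo: str, ref: str, path: str) -> str:
    """Build a jsDelivr CDN URL for a file in a GitHub repo."""
    return f"https://cdn.jsdelivr.net/gh/{user}/{repo}@{ref}/{path}"

print(jsdelivr_url("jquery", "jquery", "3.6.4", "dist/jquery.min.js"))
# -> https://cdn.jsdelivr.net/gh/jquery/jquery@3.6.4/dist/jquery.min.js
```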


Laoshu50500

I know this post might be a little unorthodox, but I just cannot wait to share this amazing YouTube channel, laoshu50500, with the folks who might read my blog.

As a non-native English speaker, I have come across plenty of practitioners who claim to be bilingual, trilingual, or multilingual. Most of them mastered their foreign languages either by growing up in a diverse environment or by affording the privilege of attending some sort of school and receiving training.

The YouTuber, Moses, totally redefined my impression of language study by posting videos of how he practices foreign languages through self-teaching and constant communication. He brings so much happiness to the people around him, strangers he has just met, by recognizing their identity, respecting their culture, and, most importantly, working hard (maybe not that hard, as he must be smart 🙂) to literally speak their language and show respect. It is not that one guy who can speak so many languages impresses me the most; it is his humble attitude and his deep desire to practice, learn, and communicate with another individual on such an equal basis. It makes me wonder: if everyone in the world spent just a little time working hard and thinking/speaking from a totally different identity, how much better this world would become.

Code HTML and CSS using VS Code

I am testing some front-end code and saw several YouTube videos using VS Code as the IDE. As a Python developer, it can be overwhelming at first glance to see SO many lines of code. However, it is like magic to see how fluently front-end developers leverage tools like VS Code and its extensions to pretty much auto-generate the code they want with only a few keystrokes. This is a post to show some of the shortcuts I came across today.

I do have to admit that VS Code's default dark theme makes it look simple and tidy. As you spend more time with it, you also realize that it has most of the features you would require of a heavy-duty IDE like Eclipse or PyCharm while staying as extensible as Sublime.

[Screenshot: VS Code with the default dark theme]

Like any IDE, VS Code comes with several shortcuts. Here is a printable cheatsheet you can refer to on a constant basis, covering quick comment, open, close, and many others.

The most useful one for me is Cmd+K Cmd+S, which opens the shortcut cheatsheet within VS Code. (Maybe there are so many key bindings that we have to get to what we need using two keystrokes; many of the shortcuts in VS Code start with Cmd+K.)

Many of the tricks were picked up straight from the MS VS Code website, including basic features like auto-complete and auto-closing (as HTML has lots of <whatever> and </whatever> pairs that are easy to miss).

Can you imagine that you only need 15 characters to generate 107 characters' worth of an HTML block? It is thanks not only to IntelliSense within VS Code but, most importantly, to Emmet abbreviations, which front-end developers like a lot.

[Screenshot: a short Emmet abbreviation expanded into a full HTML block]

In this case, each character is a short abbreviation for certain syntax (a toy expansion follows the list):

  • dot (.) by default refers to the class of a div tag
  • greater-than sign (>) moves down the DOM tree
  • hash sign (#) refers to the tag id
  • dollar sign ($) refers to auto-numbering
  • asterisk (*) refers to code block multiplication
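For example, a hypothetical abbreviation like div.wrapper>ul#list>li.item$*3 (my own toy example, not the one in the screenshot) expands to:

```html
<div class="wrapper">
    <ul id="list">
        <li class="item1"></li>
        <li class="item2"></li>
        <li class="item3"></li>
    </ul>
</div>
```

A handful of keystrokes expands into a full, correctly nested block.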

You can refer to Emmet's website for more information.

“Sharpening the axe will not interfere with the cutting of firewood.” Finding a good editor before you start spending lots of time coding is probably time well spent.


Wikidata – Histropedia

This is a great video from Ewan McAndrew's YouTube channel, with Navino explaining how Wikidata works and, most importantly, how to visualize a timeline written in SPARQL in Histropedia.

To learn more about Wikidata itself, which is a great data source for folks who want to tinker with natural language and knowledge bases, check out the main page of Wikidata.

[Screenshot: the Wikidata main page]

Geforce Now – Game running in the Cloud

This is a video I took of StarCraft II running on ultra settings in the cloud, thanks to GeForce NOW.

First, here are some “lowlights” of my gaming machine:

  • CPU: AMD FX(tm)-6120 Six-Core Processor, 3500 MHz, 3 cores, 6 logical processors
  • GPU: GTX 1050ti (upgraded)
  • Memory: 16 GB (upgraded)

Now, let's get to how GeForce NOW surprised me.

I came across an activation code in my email inbox: Nvidia had actually granted me access to the GeForce NOW free beta. I decided to give it a try, and it turned out the experience was fantastic. In essence, it offloads your gaming machine from doing all the heavy computing and instead runs the game in an Nvidia-hosted virtual environment; of course, you have to have a reasonably stable network to get the full value out of it.

My office is on the second floor and the router is on the first. The wireless connection is mediocre, so this test isn't really the best representation of GeForce NOW's full capability. I tested StarCraft II, Diablo III, and Battlegrounds, and all three of them performed really well.

The lag is bounded by the internet connection; for StarCraft II players like me who don't have 300 APM, that lag is trivial and doesn't really impact the gaming experience, but I am assuming that if you are playing a competitive shooting game, those few milliseconds might matter. Anything else should be perfectly fine. I even bought Battlegrounds on the fly, because my computer was never capable of running it and now I can play it in the cloud; I spent quite a few minutes just staring at the sky rendered by those crazy machines in the cloud.

I see this literally as a game changer, because pooling all the gaming compute into one centralized place should theoretically drop the total cost compared to each household spending thousands of dollars on the best gear of its own. However, a company isn't running a charity; it maximizes its shareholders' financial benefit. As an end consumer, I know that the internet is getting faster and better (like 5G), so if Nvidia asked me whether to buy a gaming PC or use their service, I might be willing to pay the subscription for GeForce NOW if the monthly fee were close to, or lower than, the monthly depreciation of the hardware.

Say a gaming machine is $2,000 and you expect to get full usage out of it and replace it in three years: 2000 / 3 / 12 ~ $55/month. Of course, you don't buy a computer only to play games, but many gamers do upgrade their gear purely for gaming performance. Also, taking into consideration that you can unsubscribe while on a long vacation or busy with work, it pays off.

Anyway, good job to Nvidia as usual. This made me wonder whether our next generation will be asking, “Hey, daddy, what is that big black box? Shouldn't everything run on the TV directly?” 🙂

[Screenshot: downloading the GeForce NOW beta client]

[Screenshot: the built-in network test; my internet is on the low end and far from the router, but it still works]

[Screenshot: the GeForce NOW login screen]

It looks like, from this step on, everything is already running on a Windows virtual machine. I am assuming they collect information like IP address and hardware specs in order to align the cloud resources to be best compatible with the consumer terminal.


Works perfectly for me.