Creating a Python package isn’t something most developers do routinely, so when the time comes, you can easily lose time (and nerves) over small details you forgot.
Here is a (hopefully) simple, gentle guide that’s easy to follow. Just treat it as a small recipe for the times you need to cook up a Python package.
Let’s go step by step:
pip install setuptools wheel twine
The first-level folder structure of your project (from now on called mypackage) should look like this:
|___ LICENCE
|___ mypackage/…
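To give the flavor of the next step, a minimal setup.py at the project root might look like the sketch below. All the metadata values here (name, version, description) are placeholders for illustration, not the article’s actual configuration:

```python
# Minimal setup.py sketch -- adapt the placeholder metadata to your project.
from setuptools import setup, find_packages

setup(
    name="mypackage",            # placeholder: your package name on PyPI
    version="0.1.0",             # placeholder initial version
    packages=find_packages(),    # auto-discover the mypackage/ folder
    description="A short description of mypackage",
)
```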
Yesterday, talking with Thomas Mathieu, one of my machine learning students at Le Wagon, we got an idea: what if the parameters of a machine learning model could somehow encode the data used to train it? With some nice properties on top?
A hash function is a function taking an input and giving an output such that:
1. it’s (nearly) impossible to recover the input from the output
2. the output is of fixed size
3. given an input, you always get the same output
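Two of these three properties can be checked directly with Python’s built-in hashlib (SHA-256 here is just one common choice of hash function):

```python
import hashlib

# Property 2: the output has a fixed size (32 bytes, i.e. 64 hex characters
# for SHA-256), whatever the length of the input.
h_short = hashlib.sha256(b"hi").hexdigest()
h_long = hashlib.sha256(b"a" * 10_000).hexdigest()
assert len(h_short) == len(h_long) == 64

# Property 3: the same input always gives the same output.
assert hashlib.sha256(b"hi").hexdigest() == h_short

# Property 1 (one-wayness) cannot be demonstrated by running code:
# recovering b"hi" from h_short is believed to be computationally infeasible.
print(h_short)
```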
Invented by Conway in 1970, the Game of Life is a quite fascinating application of cellular automata, with deep implications in computer science and mathematics. Let’s see here how to build a pretty flexible multi-valued, multi-dimensional implementation of cellular automata leveraging numpy.
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import colors
from IPython.display import display, Image

cmap = colors.ListedColormap(['blue', 'white', 'red'])
N = 15
We begin by defining an N×N grid with random integer values in (0, 1, 2) on its squares:
arr = np.random.randint(3, size=N * N, dtype=np.int8).reshape(N, N)
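To sketch where this is heading: one way to update such a grid with numpy is to sum neighbor contributions with np.roll. The step below uses the classic two-state Conway rule as a stand-in (the article’s multi-valued rule would replace the birth/survival conditions), with periodic boundaries as an assumption:

```python
import numpy as np
from itertools import product

def neighbor_sum(arr):
    """Sum of the 8 neighbors of each cell, with periodic (toroidal) boundaries."""
    total = np.zeros_like(arr)
    for dx, dy in product((-1, 0, 1), repeat=2):
        if (dx, dy) != (0, 0):
            total += np.roll(np.roll(arr, dx, axis=0), dy, axis=1)
    return total

def life_step(arr):
    """One step of the classic two-state rule: a dead cell with exactly 3
    live neighbors is born, a live cell with 2 or 3 neighbors survives."""
    n = neighbor_sum(arr)
    return ((n == 3) | ((arr == 1) & (n == 2))).astype(arr.dtype)
```

For example, a horizontal "blinker" (three live cells in a row) flips to a vertical one after one step.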
What is the Fibonacci sequence? It’s easy to define: the first element is 1, the second is 2, and each following element is the sum of the two previous ones: the 3rd element is 3 (2+1), the 4th is 5 (3+2), the 5th is 8 (5+3), etc.
Tonight on the Python Discord channel, a user came up with a challenge: find a faster Python implementation to compute the elements of the Fibonacci sequence than this one:
Challenge accepted. I already knew there is a closed form (direct formula) to compute the Fibonacci values, so I thought it would be…
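The article is cut off here, but the closed form it alludes to is Binet’s formula. A sketch of both approaches, using this article’s indexing (1st element = 1, 2nd = 2, i.e. the standard F(n+1)):

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # the golden ratio

def fib_closed(n):
    """n-th element under this article's convention (1, 2, 3, 5, 8, ...).
    Caveat: float rounding makes this exact only for moderate n (~70)."""
    return round(PHI ** (n + 1) / math.sqrt(5))

def fib_iter(n):
    """Straightforward iterative version, for comparison."""
    a, b = 1, 2
    for _ in range(n - 1):
        a, b = b, a + b
    return a
```

The closed form runs in (near-)constant time, which is why it looks attractive for a speed challenge, but the float-precision caveat matters for large n.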
Gradient descent is omnipresent in the training of neural networks and other machine learning models. Whenever a model’s parameters are updated during training according to a loss function, you can be almost certain that gradient descent is at work under the hood. Here we will see how gradient descent works through a very simple linear regression example.
But before starting, what is a gradient and why descend it?
Imagine you have to approximate a dataset (X, y) where X represents the features of your datapoints and y the target values (to predict) on those datapoints. The…
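The excerpt stops here, but the idea can be sketched as a gradient-descent loop for one-feature linear regression on synthetic data. The true coefficients (3 and 2), the learning rate, and the iteration count below are illustrative choices, not the article’s own:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=100)
y = 3.0 * X + 2.0 + rng.normal(0, 0.5, size=100)  # y ≈ 3x + 2 plus noise

w, b = 0.0, 0.0   # parameters to learn
lr = 0.01         # learning rate (step size down the gradient)
for _ in range(2000):
    error = w * X + b - y
    # Gradients of the mean squared error loss with respect to w and b
    grad_w = 2 * np.mean(error * X)
    grad_b = 2 * np.mean(error)
    # Step against the gradient, i.e. "descend" the loss surface
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # should land close to 3 and 2
```

Each iteration nudges (w, b) in the direction that decreases the loss the fastest, which is exactly the "descending the gradient" picture described above.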
Note from the editors: Towards Data Science is a Medium publication primarily based on the study of data science and machine learning. We are not health professionals or epidemiologists, and the opinions in this article should not be interpreted as professional advice. To learn more about the coronavirus pandemic, you can click here.
Unless you spent these last few months in a cave at the end of the world (but then you’re probably not reading this article), you couldn’t have escaped the news about the Covid-19 pandemic. The difficulty of modeling its evolution is striking. In the US, for example, a…
Data scientist and backend engineer for many years now