NIKOLAI JANAKIEV
GOOGLE ANALYTICS ANALYTICS WITH PYTHON Google Analytics is a powerful analytics tool found in an astonishing number of websites. In this tutorial, we will take a look at how to access the Google Analytics API (v4) with Python and Pandas. Additionally, we will take a look at the various ways to analyze your tracking data and create custom reports.
CLASSIFYING THE IRIS DATA SET WITH PYTORCH 27 Sep 2020. In this short article we will have a look at how to use PyTorch with the Iris data set. We will create and train a neural network with Linear layers and we will employ a Softmax activation function and the Adam optimizer.
HOW TO INSTALL PRESTO OR TRINO ON A CLUSTER AND QUERY DISTRIBUTED DATA ON APACHE HIVE AND HDFS 17 Oct 2020. Presto is an open source distributed query engine built for Big Data, enabling high-performance SQL access to a large variety of data sources including HDFS, PostgreSQL, MySQL, Cassandra, MongoDB, Elasticsearch and Kafka, among others. Update 6 Feb 2021: PrestoSQL is now Trino.
USING VIRTUAL ENVIRONMENTS IN JUPYTER NOTEBOOK AND PYTHON The virtual environment can be found in the myenv folder. For Python >= 3.3, you can create a virtual environment with: python -m venv myenv. After you have created your virtual environment, you can activate it with: source myenv/bin/activate. To deactivate it, you can run deactivate.
QUERYING S3 OBJECT STORES WITH PRESTO OR TRINO Querying big data on Hadoop can be challenging to get running, but alternatively, many solutions use S3 object stores, which you can access and query with Presto or Trino. In this guide you will see how to install, configure, and run Presto or Trino on Debian or Ubuntu with the S3 object store of your choice and the Hive standalone metastore.
CLASSIFYING THE IRIS DATA SET WITH KERAS 04 Aug 2018. In this short article we will take a quick look at how to use Keras with the familiar Iris data set. We will compare networks with the regular Dense layer and different numbers of nodes, and we will employ a Softmax activation function and the Adam optimizer.
RUNNING A PYTHON SCRIPT IN THE BACKGROUND 19 Oct 2018. This is a quick little guide on how to run a Python script in the background in Linux. First, you need to add a shebang line to the Python script, which looks like the following: #!/usr/bin/env python3. This path is necessary if you have multiple versions of Python installed, and /usr/bin/env will ensure that the first Python interpreter in your PATH is used.
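The `python -m venv myenv` workflow from the virtual-environments entry can also be driven from Python itself via the standard-library `venv` module — a minimal sketch (the temporary target path is illustrative):

```python
import tempfile
import venv
from pathlib import Path

# Create a throwaway directory and build a virtual environment inside it;
# with_pip=False skips pip bootstrapping, which keeps creation fast.
target = Path(tempfile.mkdtemp()) / "myenv"
venv.create(target, with_pip=False)

# A freshly created environment contains a pyvenv.cfg marker file and a
# bin/ (or Scripts/ on Windows) directory with the activate script.
print((target / "pyvenv.cfg").exists())  # → True
```

This is equivalent to running `python -m venv myenv` from the shell; activating the environment is still done with `source myenv/bin/activate`.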
UNDERSTANDING THE COVARIANCE MATRIX With the covariance we can calculate the entries of the covariance matrix, which is a square matrix given by C_{i,j} = σ(x_i, x_j), where C ∈ R^(d×d) and d describes the dimension or number of random variables of the data (e.g. the number of features like height, width, weight, ...). The covariance matrix is also symmetric, since σ(x_i, x_j) = σ(x_j, x_i).
MANAGE JUPYTER NOTEBOOK AND JUPYTERLAB WITH SYSTEMD 10 Nov 2020. In this article you will see how to easily manage Jupyter Notebook and JupyterLab by using the Systemd tooling. This is useful when you want to have an instance running locally or on a server.
INSTALLING AND RUNNING JUPYTER ON A SERVER In this tutorial we will be working with Ubuntu 16.04/18.04 servers, but most steps should be fairly similar for Debian 8/9 distributions. We will first go through creating SSH keys, adding a new user on the server, and installing Python and Jupyter with Anaconda.
VIDEOS AND GIFS WITH THREE.JS Three.js is a powerful JavaScript library to create 3D computer graphics in the browser using WebGL. Here we’ll see how to create animations and videos from Three.js demos. For this task we will utilize CCapture.js, which is a handy library to capture frames from the canvas. It supports WebM, gifs or images in jpg or png collected in a tar file.
CREATING SLIDES WITH JUPYTER NOTEBOOK 06 May 2018. Jupyter Notebook is a powerful tool to interactively code in web-based notebooks with a whole plethora of programming languages. With it, it is also possible to create web-based slideshows with reveal.js. The slides functionality is already included in Jupyter Notebook, so there is no need to install plugins.
LOCAL TESTING SERVER WITH PYTHON python3 -m http.server. If you are still using Python 2, you can start the server with: python -m SimpleHTTPServer. The server will serve the contents of the directory on localhost and port 8000. If you need another port, you can add the port at the end of the command (python3 -m http.server 7800 or python -m SimpleHTTPServer 7800 for Python 3 and Python 2 respectively).
CALCULATE DISTANCE BETWEEN GPS POINTS IN PYTHON 09 Mar 2018. When working with GPS, it is sometimes helpful to calculate distances between points. But simple Euclidean distance doesn’t cut it since we have to deal with a sphere, or an oblate spheroid to be exact. So we have to take a look at geodesic distances. There are various ways to handle this calculation problem.
HOW TO EXECUTE SHELL COMMANDS WITH PYTHON Python is a wonderful language for scripting and automating workflows, and it is packed with useful tools out of the box in the Python Standard Library. A common thing to do, especially for a sysadmin, is to execute shell commands. But what usually ends up in a bash or batch file can also be done in Python.
You’ll learn here how to do just that with the os and subprocess modules.
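The subprocess approach named in the shell-commands entry can be sketched in a few lines; using `sys.executable` as the command (an illustrative choice, not from the post) keeps it independent of any particular shell:

```python
import subprocess
import sys

# Run a child process and capture its output as text.
# Invoking the current Python interpreter avoids depending on shell built-ins.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from a subprocess')"],
    capture_output=True,
    text=True,
    check=True,  # raise CalledProcessError on a non-zero exit code
)
print(result.stdout.strip())  # → hello from a subprocess
print(result.returncode)      # → 0
```

The same pattern works for any command given as a list of arguments; `check=True` turns failures into exceptions instead of silently ignored return codes.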
QUERYING S3 OBJECT STORES WITH PRESTO OR TRINO 03 Mar 2021. Querying big data on Hadoop can be challenging to get running, but alternatively, many solutions use S3 object stores, which you can access and query with Presto or Trino.
ABOUT - PARAMETRIC THOUGHTS Hello, I am a freelance data scientist and data engineer. I graduated as a Master of Science in Applied Image and Signal Processing, studying in Austria.
HOW TO MANAGE APACHE AIRFLOW WITH SYSTEMD ON DEBIAN OR UBUNTU Apache Airflow is a powerful workflow management system which you can use to automate and manage complex Extract Transform Load (ETL) pipelines. In this tutorial you will see how to integrate Airflow with the systemd system and service manager, which is available on most Linux systems, to help you with monitoring and restarting Airflow on failure.
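A unit file along these lines gives systemd the restart-on-failure behaviour described above — a minimal sketch in which the user, paths and AIRFLOW_HOME are illustrative assumptions, not taken from the post:

```ini
# /etc/systemd/system/airflow-scheduler.service (illustrative paths)
[Unit]
Description=Airflow scheduler
After=network.target

[Service]
User=airflow
Environment=AIRFLOW_HOME=/home/airflow/airflow
ExecStart=/usr/local/bin/airflow scheduler
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
```

After placing the file, `systemctl daemon-reload` followed by `systemctl enable --now airflow-scheduler` would start it and keep it restarting on failure.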
VIDEOS AND GIFS WITH THREE.JS where the -delay flag specifies the delay of each frame in ticks-per-second and the -loop flag specifies the number of loops, where 0 means an endless loop. Creating more efficient gifs is covered in this post and creating videos for Instagram is covered in this post.
HOW TO CREATE YOUR DATA SCIENCE BLOG WITH PELICAN AND JUPYTER NOTEBOOKS Writing articles and tutorials is a great way to learn new things in depth while building a portfolio. In this tutorial, you will find the first steps that you will need to start your data science blog with Pelican and Jupyter Notebooks.
USING VIRTUAL ENVIRONMENTS IN JUPYTER NOTEBOOK AND PYTHON Jupyter Notebook makes sure that the IPython kernel is available, but you have to manually add a kernel with a different version of Python or a virtual environment. First, you need to activate your virtual environment. Next, install ipykernel, which provides the IPython kernel for Jupyter: pip install ipykernel
OBJECT SERIALIZATION WITH PICKLE AND JSON IN PYTHON Object Serialization with Pickle. Pickle is used for serializing and de-serializing Python objects. This is a great way to store intermediate results while computing things. Pickling and unpickling can be done with the two functions dump() and load() respectively. The only thing you have to take care of is that you open the file in binary mode.
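The binary-mode point from the Pickle entry, as a minimal round-trip sketch (the file name and sample dictionary are made up for illustration):

```python
import pickle
import tempfile
from pathlib import Path

data = {"model": "iris-classifier", "accuracy": 0.95, "labels": [0, 1, 2]}

path = Path(tempfile.mkdtemp()) / "data.pickle"

# dump() and load() require the file to be opened in *binary* mode
# ('wb' / 'rb'); a text-mode file raises a TypeError.
with open(path, "wb") as f:
    pickle.dump(data, f)

with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored == data)  # → True
```

For data that has to be readable outside Python, the json module offers the analogous dump()/load() pair on text-mode files.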
ABOUT - PARAMETRIC THOUGHTS Services. I offer various freelancing services and project collaborations in the fields of: Building Data Pipelines, Data Science and Analytics, Geospatial Processing and Data Analytics, Python Software Development, Data Visualizations and Reporting, and Building Dashboards. Previous clients include Roam Coliving (NY, USA).
Nikolai Janakiev
Freelance Data Scientist // MSc Applied Image and Signal Processing // Data Science / Data Visualization / GIS / Geometric Modelling
© 2019. All rights reserved.
PARAMETRIC THOUGHTS
INSTALLING AND RUNNING JUPYTER NOTEBOOKS ON A SERVER 12 Feb 2019
Jupyter Notebook is a powerful tool, but how can you use it in all its glory on a server? In this tutorial you will see how to set up Jupyter Notebook on a server like DigitalOcean, AWS or most other hosting providers available. Additionally, you will see how to use Jupyter notebooks over SSH tunneling or SSL with Let’s Encrypt.
ANALYZING YOUR FILE SYSTEM AND FOLDER STRUCTURES WITH PYTHON 23 Jan 2019
Say you have an external hard drive with layers upon layers of cryptically named folders and intricate mazes of directories (like here, or here). How can you make sense of this mess? Python offers various tools in the Python standard library to deal with your file system, and the folderstats module can be of additional help to gain insights into your file system. WHERE DO MAYORS COME FROM: QUERYING WIKIDATA WITH PYTHON AND SPARQL 01 Aug 2018
In this article, we will be going through building queries for Wikidata with Python and SPARQL by taking a look at where mayors in Europe are born. This tutorial builds up the knowledge to collect the data responsible for the interactive visualization from the header image, which was done with deck.gl. COMPARE COUNTRIES AND CITIES WITH OPENSTREETMAP AND T-SNE 05 Jun 2018
There are many ways to compare countries and cities and many measurements to choose from. We can see how they perform economically, or how their demographics differ, but what if we take a look at the data available in OpenStreetMap? In this article, we explore just that with the help of a procedure called t-SNE.
PREDICT ECONOMIC INDICATORS WITH OPENSTREETMAP 15 May 2018
OpenStreetMap (OSM) is a massive collaborative map of the world, built and maintained mostly by volunteers. On the other hand, there exist various indicators to measure the economic growth, prosperity, and produce of a country. What if we use OpenStreetMap to predict those economic indicators? WORKING WITH MULTIINDEX AND PIVOT TABLES IN PANDAS AND PYTHON 22 Apr 2018
Here we’ll take a look at how to work with a MultiIndex, also called a hierarchical index, in Pandas and Python on real-world data. Hierarchical indexing enables you to work with higher-dimensional data while still using the regular two-dimensional DataFrames or one-dimensional Series in Pandas. WORKING WITH PANDAS GROUPBY IN PYTHON AND THE SPLIT-APPLY-COMBINE STRATEGY 18 Mar 2018
In this tutorial we will cover how to use the Pandas DataFrame groupby function while taking an excursion into the Split-Apply-Combine strategy for data analysis. The Split-Apply-Combine strategy is a process of _splitting_ the data into groups, _applying_ a function to each group and _combining_ the results into a final data structure. CALCULATE DISTANCE BETWEEN GPS POINTS IN PYTHON 09 Mar 2018
When working with GPS, it is sometimes helpful to calculate distances between points. But simple Euclidean distance doesn’t cut it since we have to deal with a sphere, or an oblate spheroid to be exact. So we have to take a look at geodesic distances.
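One common spherical approximation of the geodesic distance is the haversine formula — a minimal sketch, in which the 6371 km mean Earth radius is a standard assumption and the city coordinates are illustrative (for the oblate-spheroid case a library such as geopy would be more accurate):

```python
from math import asin, cos, radians, sin, sqrt

def haversine(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    phi1, lam1, phi2, lam2 = map(radians, (lat1, lon1, lat2, lon2))
    # Haversine of the central angle between the two points.
    a = sin((phi2 - phi1) / 2) ** 2 + cos(phi1) * cos(phi2) * sin((lam2 - lam1) / 2) ** 2
    return 2 * radius_km * asin(sqrt(a))

# Distance between Vienna and Salzburg, roughly 250 km as the crow flies.
print(round(haversine(48.2082, 16.3738, 47.8095, 13.0550)))
```

Because the Earth is flattened at the poles, the spherical result can deviate by up to about 0.5% from the true geodesic distance.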
LOADING DATA FROM OPENSTREETMAP WITH PYTHON AND THE OVERPASS API 04 Mar 2018
Have you ever wondered where most Biergarten in Germany are, or how many banks are hidden in Switzerland? OpenStreetMap is a great open source map of the world which can give us some insight into these and similar questions. There is a lot of data hidden in this data set, full of useful labels and geographic information, but how do we get our hands on the data? COMMAND-LINE IMAGE PROCESSING WITH IMAGEMAGICK 06 Dec 2017
There are times when you are stuck with a load of images that need to be cropped, resized or converted, but doing this by hand in an image editor is tedious work. One tool I commonly use in these desperate situations is ImageMagick, which is a powerful tool for automating raster and vector image processing. Here I’ll introduce a few common commands I had to look up multiple times. BATCH GEOCODING WITH PYTHON 29 Nov 2017
You have a list of addresses, but you need to get GPS coordinates to crunch some numbers. Don’t despair, there is geocoding for this, and Python provides some simple means to help you deal with the APIs out there. FRAMING PARAMETRIC CURVES 30 Jun 2017
This article explores an efficient way to create tubes, ribbons and moving camera orientations based on parametric curves with the help of moving coordinate frames. THREE WAYS TO GET MOST OF YOUR CSV IN PYTHON 24 Jun 2017
One of the crucial tasks when working with data is to load the data properly. The most common format is CSV, which comes in different flavors and with varying difficulty to parse. This article shows three common approaches in Python. WORKING WITH TIME AND TIME ZONES IN PYTHON 07 Jun 2017
Time conversions can be tedious, but Python offers some relief for the frustration. Here are some quick recipes which are quite useful when juggling with time.
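One such recipe, using only the standard library (a fixed +01:00 offset stands in for a real zone here, so the sketch stays independent of the system's tz database):

```python
from datetime import datetime, timedelta, timezone

# Attach an explicit UTC offset instead of working with naive datetimes.
utc = timezone.utc
cet = timezone(timedelta(hours=1))  # fixed +01:00 offset for the example

t = datetime(2017, 6, 7, 12, 0, tzinfo=utc)

# Converting between zones is a single astimezone() call.
local = t.astimezone(cet)
print(local.isoformat())  # → 2017-06-07T13:00:00+01:00

# Aware datetimes subtract correctly across zones: same instant, zero delta.
print(local - t)  # → 0:00:00
```

For zones with daylight saving rules, the zoneinfo module (Python 3.9+) provides named IANA time zones with the same astimezone() interface.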
UNDERSTANDING THE COVARIANCE MATRIX 02 Mar 2017
This article shows a geometric and intuitive explanation of the covariance matrix and the way it describes the shape of a data set. We will describe the geometric relationship of the covariance matrix with the use of linear transformations and eigendecomposition.
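The entry-wise definition quoted earlier, C_{i,j} = σ(x_i, x_j), can be written out in a few lines of plain Python — a sketch on small made-up data (NumPy's cov function would normally do this):

```python
def covariance(xs, ys):
    """Sample covariance sigma(x, y) with the usual n - 1 denominator."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

# Rows = observations, columns = random variables (e.g. height, weight).
data = [
    [1.0, 2.0],
    [2.0, 4.1],
    [3.0, 6.2],
]
cols = list(zip(*data))  # one tuple of values per variable

# C[i][j] = sigma(x_i, x_j): symmetric, with the variances on the diagonal.
d = len(cols)
C = [[covariance(cols[i], cols[j]) for j in range(d)] for i in range(d)]
print(C[0][1] == C[1][0])  # → True
```

The symmetry printed at the end is exactly the σ(x_i, x_j) = σ(x_j, x_i) property from the definition, and C[i][i] is the variance of variable i.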