TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. TensorFlow Serving makes it easy to deploy new algorithms and experiments, while keeping the same server architecture and APIs. TensorFlow Serving provides out-of-the-box integration with TensorFlow models, but can be easily extended to serve other types of models and data.
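Once a model is exported, serving it comes down to calling the prediction endpoint the server exposes. Below is a minimal sketch of querying a TensorFlow Serving REST endpoint from Python; the model name ("my_model"), the port (8501, TensorFlow Serving's default REST port) and the input shape are illustrative assumptions, not values from this post.

```python
# Minimal sketch: querying a TensorFlow Serving REST endpoint with Python.
# Assumes a model exported under the name "my_model" is already being served
# locally on the default REST port (8501); name, port and input are examples.
import json
import requests

# TensorFlow Serving exposes a predict endpoint per served model.
url = "http://localhost:8501/v1/models/my_model:predict"

# The REST API expects a JSON payload with an "instances" list,
# one entry per input example.
payload = {"instances": [[1.0, 2.0, 3.0, 4.0]]}

response = requests.post(url, data=json.dumps(payload))
response.raise_for_status()

# Predictions come back under the "predictions" key.
print(response.json()["predictions"])
```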
TorchServe is the ML model serving framework developed by the PyTorch team.
Currently there are a lot of different solutions for serving ML models in production, driven by the growth of MLOps as the standard way of working with ML models throughout their lifecycle. Perhaps the most popular one is TensorFlow Serving, developed by the TensorFlow team to serve their models in production environments, and TorchServe was created after it so that PyTorch users can easily serve their models too.
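As a quick illustration of how similar the workflow feels, here is a minimal sketch of sending an inference request to a running TorchServe instance. It assumes a model has already been archived and registered under the name "my_model" and that TorchServe is listening on its default inference port (8080); the model name and the input file are placeholders.

```python
# Minimal sketch: sending an inference request to a running TorchServe instance.
# Assumes a model registered as "my_model" and the default inference port (8080);
# the model name and the sample input file are illustrative.
import requests

url = "http://localhost:8080/predictions/my_model"

# TorchServe accepts the raw input (e.g. an image) as the request body.
with open("sample_input.jpg", "rb") as f:
    response = requests.post(url, data=f)

response.raise_for_status()
print(response.json())
```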
Kaggle is an online community of data scientists owned by Google. The most relevant factors that made it the world's largest community of data scientists are the competitions, which encourage users to solve complex data science and machine learning projects, and the dataset hosting that it provides.
In this post I will be focusing on datasets and how they can be created via web scraping with Python, so that later we can contribute to the Kaggle community by uploading them for research purposes.
Web Scraping involves fetching data from the web and extracting it, so the…
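The fetch-and-extract loop behind web scraping is straightforward. The sketch below uses requests and BeautifulSoup; the URL and the tag being selected are placeholders, not the actual target of this post.

```python
# Minimal sketch of the fetch-and-extract loop behind web scraping.
# The URL and the selector are placeholders; any public page with the
# appropriate markup could be used instead.
import requests
from bs4 import BeautifulSoup

url = "https://example.com"

# Fetch the raw HTML of the page.
response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
response.raise_for_status()

# Parse the HTML and extract the pieces we care about.
soup = BeautifulSoup(response.text, "html.parser")
titles = [tag.get_text(strip=True) for tag in soup.select("h2")]

# The extracted records could then be written to a CSV and shared on Kaggle.
print(titles)
```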
As we know, a company profile is a professional introduction to a business that aims to inform the audience about its products and services, so it is highly relevant when it comes to classifying companies for further analysis, as defined in this Udemy Blog.
As described in Wikipedia, data extraction is the act or process of retrieving data out of (usually unstructured or poorly structured) data sources for further data processing or data storage (data migration). The import into the intermediate extracting system is thus usually followed by data transformation and possibly the addition of metadata prior to export to another stage in the data workflow.
Baumgartner defines a web data extraction system as “a software extracting, automatically and repeatedly, data from Web pages with changing contents, and that delivers the extracted data to a database or some other application”.
I decided to create investpy due to the needs of my Final Degree Project (TFG, in Spanish) on Computer Engineering at the University of Salamanca (USAL), titled “Machine Learning for stock investment recommendation systems”. I also found out there were no Python packages for historical data extraction from the Spanish stock market, so I thought it could be useful to publish my work so everyone can use it.
investpy is a Python package for historical data extraction from equities, funds and ETFs from the continuous Spanish market. It is based on Web Scraping and HTML Parsing in order to retrieve…
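As a quick taste of what that looks like in practice, here is a minimal sketch of retrieving historical data with investpy. It assumes a recent investpy release where the stocks API takes a ticker and a country; the ticker (BBVA) and the date range are just illustrative values.

```python
# Minimal sketch of retrieving historical data with investpy.
# Assumes a recent investpy release where the stocks API takes a ticker and a
# country; the ticker and date range below are illustrative values only.
import investpy

df = investpy.get_stock_historical_data(
    stock="BBVA",
    country="spain",
    from_date="01/01/2019",
    to_date="01/01/2020",
)

# The result is a pandas DataFrame with Open, High, Low, Close and Volume columns.
print(df.head())
```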