
statcodelists: use standard, language-independent variable codes to support international data interoperability and machine reuse in R

A new building block of our Green Deal Data Observatory went through code peer review and was released yesterday. The statcodelists R package aims to promote the reuse and exchange of statistical information and related metadata by making the internationally standardized SDMX code lists available to R users.
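To get a feel for what the package offers, here is a minimal sketch. Listing a package's shipped data sets with base R works for any package; the specific code list name CL_OBS_STATUS is an assumption based on SDMX naming conventions, not a documented guarantee.

```r
# List the data sets (code lists) shipped with the package:
library(statcodelists)
data(package = "statcodelists")

# Load one standardized SDMX code list. The object name CL_OBS_STATUS is an
# assumption following SDMX conventions (observation status codes).
data("CL_OBS_STATUS", package = "statcodelists")
head(CL_OBS_STATUS)
```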

A representative sample of n=100793 observations from 5 years on the most serious global problem. Get the tidy dataset from our repository or API.

| Subject | Number | Datasets |
|---|---|---|
| Environmental Protection | 2/2 | Government Budget Allocations for R&D in Environment; Environment's Share in Total Government Budget Allocations for R&D |
| Climate change | 1/1 | Most Serious Global Problem: Climate Change (Percentage of European Individuals) |

For the Climate change subject, as recognized by the Library of Congress Subject Headings, with the variations Changes, Climatic; Changes in climate; Climate change; Climate change science; Climate changes; Climate variations; Climatic change; Climatic changes–Environmental aspects; Climatic fluctuations; Climatic variations; Global climate changes; and Global climatic changes, we find 265 datasets, but after close inspection only 25 are tabular data in csv or xlsx files.

How We Add Value to Public Data With Imputation and Forecasting

Public data sources are often plagued with missing values. Naively, you may think that you can ignore them, but think twice: in most cases, missing data in a table is not missing information but malformed information, which will break your visualization or stop your application from working. In this example we show how we increased the usable subset of a public dataset by 66.7%, rendering useful what would otherwise have been a deal-breaker in panel regressions or machine learning applications.
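The post does not spell out the imputation method here; as an illustration only, the sketch below fills the gaps of a small made-up panel with linear interpolation per country, using base R.

```r
# Illustrative only: linear interpolation of missing values within each
# country's time series. The data frame is made up for the example.
df <- data.frame(
  geo   = rep(c("AT", "HU"), each = 5),
  year  = rep(2016:2020, times = 2),
  value = c(10, NA, 14, NA, 18,   3, 4, NA, NA, 7)
)

impute_linear <- function(d) {
  ok <- !is.na(d$value)
  # approx() interpolates between observed points; rule = 2 carries the
  # first/last observed value out to the edges of the series.
  d$value <- approx(d$year[ok], d$value[ok], xout = d$year, rule = 2)$y
  d
}

df_imputed <- do.call(rbind, lapply(split(df, df$geo), impute_linear))
```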

How We Add Value to Public Data With Better Curation and Documentation

Many people ask whether we can really add value to free data that anybody can download from the Internet. We do not only work with easy-to-download data; we know that free, public data usually requires a lot of work to become truly valuable. To start with, it is not always easy to find.

The Data Sisyphus

Sisyphus was punished by being forced to roll an immense boulder up a hill, only for it to roll back down every time it neared the top, repeating this for eternity. When was a file downloaded from the internet? What has happened to it since? Are there updates? Was a bibliographical reference created for citations? Were missing values imputed? Were currencies converted? Who knows about it: who created the dataset, and who contributed to it? Which version is the final one, checked and approved by a senior manager?
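These questions are answerable only if every step in a dataset's life is logged. Here is a hypothetical sketch of such a processing-history table in R; all column names and entries are illustrative, not our actual schema.

```r
# Hypothetical processing-history log: one row per event in the file's life.
processing_history <- data.frame(
  timestamp   = as.POSIXct(c("2021-03-01 10:15", "2021-03-02 09:00")),
  action      = c("downloaded", "imputed_missing_values"),
  agent       = c("data_curator", "imputation_script_v1"),
  description = c("Retrieved source file from the statistical agency",
                  "Linear interpolation applied to the value column")
)
```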

Metadata

Adding metadata exponentially increases the value of data. Did somebody already adjust old data to conform to constantly changing geographic boundaries? What are some practical ways of combining satellite sensory data with my organization's records? And do I have the right to do so? Metadata logs the history of data, providing instructions on how to reuse it and setting the terms of use. We automate this labor-intensive process by applying the FAIR data concept.
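As an illustration of what machine-readable, FAIR-oriented metadata can look like, here is a minimal sketch that records a DataCite-style description and serializes it to JSON; the field selection is our own simplification, not a formal schema.

```r
# Minimal, illustrative metadata record loosely following DataCite fields.
library(jsonlite)

metadata <- list(
  title      = "Government Budget Allocations for R&D in Environment",
  creator    = "Green Deal Data Observatory",
  issued     = "2021-03-02",
  version    = "1.0.0",
  license    = "CC-BY-4.0",   # the terms of use travel with the data
  provenance = "Missing values imputed; see the processing-history log"
)

writeLines(toJSON(metadata, pretty = TRUE, auto_unbox = TRUE), "metadata.json")
```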

Developing an Open API is the Right Direction

I believe that with curators' priorities and the development of an easily accessible, open API, we are moving in the right direction.
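As a sketch of what consuming such an open API could look like from R: the endpoint URL and the shape of the response below are purely hypothetical.

```r
# Hypothetical example: the URL is a placeholder, not a real endpoint.
library(httr)
library(jsonlite)

resp <- GET("https://api.example.org/datasets/climate-change",
            query = list(format = "json"))
stop_for_status(resp)

dataset <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
head(dataset)
```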

Metadata

Uncut diamonds need to be cut and polished, and you have to make sure that they come from a legal source.